.topic 2
Wherever we look, we encounter virtual worlds. Whether in Hollywood or in daily advertisements, the world presented to the viewer is largely an artificial creation. At last, you too can create these surreal worlds at home, practice interior and exterior architectural design, design products or promotional logos and films, or build your own virtual worlds using the built-in landscape designer and atmospheric background models. All the modules required to generate, model, manipulate, animate and render the three-dimensional world are integrated into one program: all actions are controlled directly from within CyberMotion. There is a choice of several rendering algorithms, from simple scanline (depth buffer) rendering to up-to-date global illumination methods using raytracing and photon mapping. The possibilities for defining object surfaces, light conditions and backgrounds are almost inexhaustible - visual libraries and fast preview windows in the main dialogs invite you to experiment and play with this vast functionality.
Complex 3D animations can be set up very easily - you can animate all objects, camera and light settings, backgrounds, fire, water and even clouds and fog in the atmosphere. You can arrange objects in hierarchies to animate jointed models or robots and easily position joints with the help of Inverse Kinematics. And with the up-to-date Skin and Bones technology, even character animation is available in CyberMotion. Call your own heroes and monsters into being - CyberMotion gives you all the tools you need to do it.
And all this vast functionality comes to you at a sensational price. PC-Games Hardware (3D-Special Edition, 01/04) tested it and came to the conclusion: "Cost-Performance-Ratio: Excellent". They commended CyberMotion for the "...solid and user oriented concept..." and the "...intuitive interface."

Where to start?
A program with this much functionality cannot be mastered in a single day, but this extensive manual will hopefully give you a good start on exploring the depths of CyberMotion. You should begin with the chapter "A little introduction to the world of 3D", then go on to the workspace overview and continue by working through some of the tutorials.

Context-sensitive help (F1 key)
Context-sensitive help can be obtained at any time by pressing the F1 key. If, for example, you are in the "Rotate Object" menu and you press the F1 key, the help window opens automatically at the topic "Rotate Object". The same goes for all open dialogs - simply press F1 and the corresponding section for the dialog appears automatically in the help window.

Print manual
For registered users the CyberMotion help is also available as a PDF document (about 356 pages, 11 MB download). The PDF document is structured similarly to the help file; for instance, you can click on links or topics in the table of contents to jump to the referenced topic. In addition, you can print the PDF file as one coherent document with a complete table of contents and an extensive keyword index. Visit the 3d-designer web page to download the document: http://www.3d-designer.com/en/download/download.htm

CyberMotion® 3D-Designer © 1995-2005, Reinhard Epp Software
The English manual was translated by John Ridgway

.topic 250
What registering will get you
A registration code to:
- remove the watermarks that are copied into every rendered picture.
- enable the export functions (CMO, DirectX, 3DS, VRML 2.0, DXF, RAW)
Free access to the 3D Online Library at www.3d-designer.com
The more people who register this shareware, the more incentive there is to make CyberMotion even better.

How to register
Before registering, write down the serial number that is displayed in the info screen right after starting the CyberMotion program. This number is required in the registration process; without it no registration code can be generated. You can call up the info screen at any time by selecting the "Help - About CyberMotion 3D-Designer" entry in the menu bar. Then just click on the "Register Now" text link provided in the info screen to point your internet browser to the CyberMotion order page. Or just use this direct link instead: www.3d-designer.com/en/order/order.htm
We use SSL (Secure Socket Layer) encryption technology to transfer your data securely. Our encryption is certified by VeriSign. There are various payment options, such as Credit Card, Debit Cards from Switch and Solo (UK), Bank/Wire Transfer and payment by Check or Cash. When you pay by credit or debit card, your registration code is provided immediately after placing your order; otherwise the product is delivered after receipt of funds. Detailed help regarding the different payment options is provided during the registration process.

Entering the registration code
After placing your order and receiving the registration code, call up the CyberMotion info screen again. Select the "Enter Registration Code" button and enter your registration name and code to unlock your copy of CyberMotion 3D-Designer. No more watermarks will be copied into rendered pictures and the export functions will be enabled.

What can I do if I lose my registration code or want to install CyberMotion on another computer?
As a registered user you can ask at any time for additional free registration codes for your registered version. Just send a short email with the new serial number that is displayed in the info screen when you start CyberMotion on the new system configuration.

Registration Fee
*Orders from the EU include 16% VAT

Student Version - 30% Discount for Students
Please first send a scan of your student ID card via email (support@3d-designer.com) or fax (+49-721-151-559276) and you will receive a special link to order the full version (no restrictions) at only USD 89.00 / EUR 89.00.

.topic 620
Here is a short summary, followed by a more detailed specification, of all new functions and expansions in CyberMotion 3D-Designer version 10.0. The elimination of minor errors and repairs is not listed here.
Please note: CyberMotion 3D-Designer v. 10.0 will read all files of older versions, but, because of the many changes in the animation system, it is not possible to convert old animations to match the new system 100%. All positional and rotational changes will be converted correctly, but if scaling is involved the imported animation has to be revised. V.10.0 files, on the other hand, will not load into older versions due to the general changes in the animation system and the expanded world dimensions.
New features in version 10.0:

Animation
- Work on a project is now divided into Modeling Mode and Animation Mode
- Hierarchy-Independent Animation
- Character Animation using the Skin and Bones technique

Animation Editor
- Separate timelines (tracks) for all objects and features that can be animated
- Cut, Copy, Paste - Animation sequences can be cut, copied and pasted to and from the clipboard
- Relative Copy of movements from object to object
- Multi Paste for animation loops
- Editable Duration of Animation
- A selected frame range in a timeline can now be moved freely with the mouse
- Acceleration between keyframes can now be controlled for all individual tracks
- A separate Undo/Redo function undoes all operations in the animation editor
- Field Rendering for interlaced video

Navigation Button-Strip
- The navigation button-strip is now accessible only in Animation Mode
- The button to call up the animation editor has been integrated into the navigation button-strip
- A new slider is provided with which you can scroll through the animation
- New record button for manual creation of keys
- New buttons are provided to force key generation for additional tracks and/or objects in a hierarchy

Work Modes
- World dimensions are expanded to 2^24 = 16,777,216 units
- Work on a project is now divided into Modeling Mode and Animation Mode
- Objects can be moved (scaled or rotated) with or without their movement paths
- Group Objects are used to group together a number of objects
- New snap functions facilitate the positioning of objects in the scene
- Use the mouse wheel to zoom in and out of viewport and camera windows
- The window detail can now be moved in any work mode with the mouse (press left and right mouse buttons simultaneously)
- You can now grab crosshairs at the center arrows to move them around
- The undo and redo functions have been optimized
- Many more new and revised functions (see the following detailed description)

Viewport Render Engine
- The CyberMotion viewport render engine has been revised and optimized; transparencies are now also shown in the viewport windows

Raytracing Render Engine
- Up to 250% faster rendering of complex scenes containing high resolution models

Background
- Rainbows
- Clouds - Condensation Trails
- Weather machine - A post-processing particle effect simulates rain (stripes), snowflakes or, e.g., floating particles in water
- The starfield generator has now been integrated into the atmospheric background model - star intensities can be animated

Material
- The Texture Blur function blurs procedural texture patterns with increasing distance to reduce noise in the distance
- Mip-Mapping generates additional bitmaps resized and filtered to lower resolutions to reduce texture noise from bitmaps in the distance
- Bitmaps can now be tiled with a given number of repetitions in x- or y-direction

Miscellaneous
- New visual project library for project management
- You can now change the root folder for libraries
- Drag and drop of CyberMotion project files directly from the Windows Explorer
- Lensflares with rainbow effect
- New shortcuts for all important dialogs are provided
- Customize Dialog - a new dialog for general program settings
- Object Selection Dialog - Double clicking on a plus folder icon will expand the whole selected hierarchy branch
- More precise and logical object selection in viewport windows
- Still pictures in Animation Mode are now rendered with all effects, including particles and motion blur
Detailed description of the new features in version 10.0:

Modeling Mode vs. Animation Mode
Work on a project is now divided into Modeling Mode and Animation Mode. In principle there is no great change in the working process. You have the same working menus as before, except that some of the functions of Animation Mode are no longer accessible in Modeling Mode and vice versa. There are two prominent buttons at the top left corner of the CyberMotion window to switch between the two modes.
In Modeling Mode all changes made to an object - e.g. the deforming of an object by working on individual points - are permanent changes of the object's shape, while in Animation Mode every action is merely a transformation of the model data and can be undone at any time by reversing the working steps or deleting the keys that were created automatically when manipulating an object in Animation Mode.
Another example: Scaling an object and its children in Modeling Mode will result in a permanent change of size of the model throughout the entire animation. If the children are deformed by this scaling, this deformation is a permanent change of the shape of the objects. If you scale an object and its children in Animation Mode, it is only a temporary change of size. Moreover, the children will not be scaled at all - it is just their coordinate systems that are temporarily deformed by the scaling of their parents, without influencing the children's object data (Hierarchy-Independent Animation, see below). At first glance there is no apparent difference, because the children are scaled with their parents anyway, but if you want to fully understand the underlying animation principles of Hierarchy-Independent Animation, this is a very important factor.

Animation
The underlying animation system has been completely reprogrammed.
Hierarchy-Independent Animation - All objects perform their movements in their own coordinate systems, irrespective of whether they are parents or children. Thus, the animation data can be copied as relative movement data to other objects or hierarchies. Alternatively, simply animate an object and link it afterwards to another object. It will still perform its own movements, but it will also follow all movements of the new parent. (A minimal sketch of this local-coordinate idea follows at the end of this Animation section.)
Relative Movements, Scaling and Rotation - For instance, you can copy an animated walking character with its animation data, then move it in Relative Mode to a new position, rotate it in Relative Mode to let it face in another direction and then just let it go with the copied animation data in this new direction.
Character animation with skeletal deformation (skins and bones) - In the new Bones working mode you can easily build chains of bones to create a skeleton. Linking a skeleton under a polygon or NURBS object will automatically convert this object to a deformable skin. Align the bones within their new skin and allocate the skin's points to the corresponding bones. If you then change to Animation Mode and move or rotate the bones of your skeleton, the skin will automatically be deformed by its bones. You can use bones to animate character movements, facial expressions, or, e.g., the deformation of cloth.
Deformable Textures - Textures are aligned and scaled in Modeling Mode only. When scaling or deforming objects or skins in an animation, all textures will be deformed properly with the object.
Material Animation - Materials, including landscape textures, have their own animation track and almost all their parameters can be animated.
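The following minimal Python sketch illustrates the local-coordinate idea behind Hierarchy-Independent Animation (an illustration only, not CyberMotion's internal code; the object names and numbers are made up): each object keeps its keys purely in its own coordinate system, and its world placement is obtained only by composing with its parent's transform, which is why the same key data can be copied to other objects or re-parented freely.

    # Sketch: hierarchy-independent animation via purely local keys.
    import numpy as np

    def translation(x, y, z):
        m = np.eye(4)
        m[:3, 3] = [x, y, z]
        return m

    def world_matrix(obj, local_at_frame, frame):
        """Compose the parent chain: world = parent_world @ local."""
        m = local_at_frame(obj, frame)          # keys are purely local
        if obj.get("parent") is not None:
            m = world_matrix(obj["parent"], local_at_frame, frame) @ m
        return m

    # Two hypothetical objects: the child's keys never mention the parent,
    # so the same key data could be pasted onto any other object.
    parent = {"name": "torso", "parent": None}
    child  = {"name": "head",  "parent": parent}

    def local_at_frame(obj, frame):
        if obj["name"] == "torso":              # torso walks along +z
            return translation(0, 0, 0.1 * frame)
        return translation(0, 1.6, 0)           # head sits 1.6 units above torso

    print(world_matrix(child, local_at_frame, 50)[:3, 3])   # -> [0.  1.6 5. ]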
Field Rendering for interlaced video - Regular TV uses interlaced video. An interlaced video picture contains two fields of picture information shot at different times. In the first shot the picture information is saved in all odd-numbered scanlines (1, 3, 5, ...) and in the second shot all even-numbered scanlines (2, 4, 6, ...) are saved in the same video frame. When playing this video on TV, both fields are played in succession to produce the interlaced TV picture. So, when watching television you always see only half the lines of a picture - it is the playing frequency and the luminous characteristics of a television screen that give the impression of full-frame pictures. If you plan to play your CyberMotion animations on TV you can now switch on Field Rendering for AVI output, too. Since twice as many pictures (each of half resolution) are rendered, Field Rendering gives smoother motion and can even reduce or eliminate the need to render motion blur - which can save rendering time.

Animation Editor
Separate timelines (tracks) for all objects and features that can be animated - Keys will be created automatically (as in older versions when manipulating an object), but only for the corresponding object and the particular tracks involved. Rotating an object in a hierarchy, for example, will only generate a rotation key on a rotation track for that particular object. The following tracks can be added: Position, Scale, Rotate, Parameter, On/Off, Deformation and Material.
Cut, Copy, Paste - Animation sequences can be cut or copied to the clipboard and - after marking the destination selection - can be copied to another timeline position of the same object or any other object. Even the entire animation data of a character hierarchy can be copied this way, provided that the structures of the hierarchy trees of the source and destination selections are similar.
Absolute or Relative Copy of position and rotation tracks:
- Absolute Copy - The destination object will move to exactly the same position as the source object and it will also rotate through exactly the same angles of its body axes.
- Relative Copy - The movement vectors and angles as seen from the source object's local coordinate system will be applied to the destination object's local coordinate system. If, for example, a character is moving "forward" along its local z-body axis and this movement is copied in Relative Mode to the destination object, then the destination object will perform this movement along its own z-body axis. It is much the same for rotations - the copied rotation data of the source object will be used to rotate the destination body about the axes of its own local coordinate system.
Multi Paste - To loop an animation sequence, just copy it with an editable number of repetitions to the destination position in the timeline - again either in Absolute or Relative Copy Mode.
Duration of Animation - You can now change the duration of an animation by editing the number of frames or the time of the animation.
A selected key or frame range in a timeline can now be moved freely with the mouse between its neighboring keys.
Acceleration between keyframes can now be controlled for individual tracks, e.g., rotations - speeding up slowly and decelerating again when approaching the destination angle. This also applies to other animation parameters, e.g., the fading of background colors or light intensities. (A small sketch of such ease-in/ease-out interpolation follows at the end of this Animation Editor section.)
Undo/Redo - All operations in the animation dialog can be undone immediately by a separate Undo/Redo function in the animation dialog. Of course, after leaving the dialog, all changes can still be undone as a whole via the general Undo/Redo functions in the main button bar.
Animation paths can now be drawn for all objects of a hierarchy.
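As an illustration of the controllable acceleration between keyframes, here is a minimal Python sketch of ease-in/ease-out interpolation (a generic smoothstep curve, not necessarily the exact curve the animation editor uses):

    # Sketch: ease-in/ease-out between two keyframes.
    def ease_in_out(t):
        """Map linear time t in [0, 1] to an eased fraction (smoothstep)."""
        return t * t * (3.0 - 2.0 * t)

    def interpolate(key_a, key_b, frame, frame_a, frame_b, ease=True):
        t = (frame - frame_a) / float(frame_b - frame_a)
        if ease:
            t = ease_in_out(t)
        return key_a + t * (key_b - key_a)

    # Rotating from 0 to 90 degrees over frames 0..30:
    for f in (0, 5, 15, 25, 30):
        print(f, round(interpolate(0.0, 90.0, f, 0, 30), 1))
    # The eased angles bunch up near both keys and change fastest in the middle.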
Navigation Bar
The navigation bar is now located at the bottom of the main window and is accessible only in Animation Mode. The button to call up the animation editor has now been integrated into the navigation bar. In addition to the navigation buttons, an additional slider is provided with which you can scroll through the animation. (Click on the slider to activate it and use the mouse wheel to scroll.)
Usually keys are created automatically when manipulating objects. An additional record button is provided in the navigation bar, so you can create keys at timeline positions where no object manipulation is intended (for instance, to fix a specific object at the starting point for a planned movement from this timeline position).
New buttons are provided to force key generation for additional tracks and/or objects in a hierarchy - for children only or for the whole hierarchy. This way, by manipulating an object or by pressing the record button, additional keys for all selected tracks and/or hierarchy objects are created. For example, if you animate characters, it is recommended that you activate key creation for the whole hierarchy tree, or at least for position and rotation tracks, so the exact alignment of object axes and positions is saved at every timeline position.
Example: You animate a walking sequence for a character. After creating the walking sequence you decide that you want the character to look up at the sky and, accordingly, you move forward in the timeline and rotate the character's head to look up. Now, if you play the animation, the character starts to look up at the sky from the beginning of the walking sequence instead of from its end, since the head was not involved in the creation of the walking sequence and therefore no keys were created for it. Forcing key creation for the whole hierarchy avoids this problem.

Work Modes
World Dimensions - The world space is no longer limited to ±16,000 units. The work space now extends to a dimension of 2^24 = 16,777,216 units.
Move - Modeling Mode
- In Modeling Mode objects in hierarchies can be moved without their children.
- Object axes can now be moved in this working mode, too.
- You can now move objects along their object axes.
Move - Animation Mode
- Objects can be moved with or without their movement paths.
Scale - Modeling Mode
- Scaling in Modeling Mode changes the size of the object for the whole animation!
- In Modeling Mode objects in hierarchies can be scaled without their children.
- Reference Point of Scaling - If you scale along world axes, you can choose a single reference point of scaling (crosshairs or the object center of the reference object), or you can decide to scale all objects/hierarchies from their own axes centers or from the topmost marked hierarchy object, respectively. If you scale along body axes, the axes center of the topmost selected parent is always used as the reference point.
- Additional Mouse Lock button (world axes) to scale symmetrically in a viewport plane.
- Additional Mouse Lock buttons (body axes) to easily switch between scaling along individual axes or scaling in the corresponding planes standing perpendicular to those axes.
- There will no longer be alerts if you try to scale and deform an analytically defined object. If the scaling operation cannot be performed without deforming the analytical object, it will simply be displaced by the scaling.
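The following tiny Python sketch (illustration only, not the program's code) shows what the reference point of scaling means numerically: every point is scaled relative to the chosen pivot, so the pivot itself stays in place.

    # Sketch: scaling about a reference point (pivot).
    def scale_about(point, pivot, factor):
        return tuple(p0 + factor * (p - p0) for p, p0 in zip(point, pivot))

    box_corner = (4.0, 2.0, 0.0)
    print(scale_about(box_corner, (0.0, 0.0, 0.0), 2.0))  # pivot at the origin -> (8.0, 4.0, 0.0)
    print(scale_about(box_corner, (4.0, 2.0, 0.0), 2.0))  # pivot on the corner -> (4.0, 2.0, 0.0)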
Scale - Animation Mode
- Reference Point of Scaling - Only scaling along body axes is available. The reference point of scaling is always the object axes center of the topmost selected parent.
- Objects can be scaled relatively, so that the size of the object in all following keys is scaled with the same value as in the current timeline position.
Rotate - Modeling Mode
- In Modeling Mode, objects in hierarchies can be rotated without their children.
- Reference Point of Rotation - You can choose to rotate all selected objects and hierarchies about one reference point (crosshairs in world axes mode or the object center of the reference object in object axes mode), or you can decide to rotate all objects/hierarchies about their own axes centers or about the axes center of the topmost marked hierarchy object, respectively.
Rotate - Animation Mode
- Reference Point of Rotation - In an animation, every rotation of an object is performed about its own axes system or the axes system of a parent object; therefore no other crosshair position can be chosen as the reference point for rotation. Objects and hierarchies are always rotated about the topmost marked hierarchy object.
- Objects/hierarchies can be rotated relatively, together with their movement paths, at any timeline point in the animation. Only the following key positions will be rotated, so you can bend the movement path at the current timeline position.
- If you rotate an object, the shortest angle is always calculated to animate the rotation from the last key position to the current key position. An additional angle edit field is now provided in which you can enter any angle you like. If, for example, you want a wheel to rotate several times, just rotate it with the mouse about the corresponding axis. The shortest angle to the latest key position will appear in the edit field. Now just enter the desired number of rotations multiplied by 360° for each full rotation.
- As mentioned above, the shortest angle is always chosen to rotate from one key position to the next: e.g., if you turn an object clockwise through 270 degrees, in the animation it will turn 90 degrees anticlockwise instead. You can then simply enter the desired angle in the angle edit field, or just press the "Reverse Rotation" button instead - it will automatically calculate the reverse angle.
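A small numeric sketch in Python of the shortest-angle behaviour described above (an illustration only, not CyberMotion's code):

    # Sketch: the "shortest angle" between two key positions.
    def shortest_angle(delta_degrees):
        """Reduce an angle difference to the equivalent value in (-180, 180]."""
        a = (delta_degrees + 180.0) % 360.0 - 180.0
        return 180.0 if a == -180.0 else a

    print(shortest_angle(270.0))    # -> -90.0 : animated as 90 degrees the other way
    print(shortest_angle(-270.0))   # ->  90.0 : the "Reverse Rotation" direction
    # To make a wheel turn three full times plus a quarter, enter 3*360 + 90 = 1170.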
Work Modes - Miscellaneous
Group Objects - Group Objects are simply used to group together a number of objects by linking them under the Group Object in a hierarchy. Group Objects are only visible in the viewport windows and are always hidden in the final rendering. You can also use Group Objects as reference points for rotating or scaling object groups in animations. If, e.g., you want to rotate a group of objects around a common midpoint, you would link this group under a Group Object and just rotate the Group Object with its child objects. It is the same for camera rotations. If you want to circle the camera around another object or group of objects in an animation, just place a Group Object at the visual focus within that group and link the camera under this Group Object. Use the "Focus" camera function to center the Group Object in the camera view, then rotate the Group Object and the camera will rotate with it in a perfect circle, with the focus always centered on the object group.
Crosshairs - If you want to move the center of the crosshairs, e.g., when scaling or rotating about the world axes, simply grab the crosshairs at the center arrows and move them freely around. You do not need to change into a different work mode; e.g., in the "Rotate" work mode, you no longer need to change between "Rotate Object" and "Move Axes".
Snap Functions - When moving objects, points or axes you can now apply different snap functions that facilitate the positioning of the objects in the scene. If, e.g., you switch on the snap function for grid lines, an object will be "caught" automatically by grid lines when near to them. The size of the grid can of course be edited. There are several additional snap options - you can choose to snap a selection to grid lines, grid points, object points and lines, object midpoints and object axes. Snapping can be done in the 2D working plane only, or in 3D mode with additional depth testing enabled.
Viewport and Camera Zoom - Use your mouse wheel to zoom directly into or out of activated viewport or camera windows.
Move Window-Detail - In addition to the separate work mode for positioning the viewport windows (with depiction of the window coordinates), you can now move the window detail in any work mode just by pressing the left and right mouse buttons simultaneously and moving the mouse around.
Undo and Redo - The undo and redo functions have been optimized so that they are slightly faster and more compact. The maximum number of undos and the maximum size of memory reserved for the undo functions can be edited in the new customize dialog (menu entry "File - Customize").

Additional Background Effects
Rainbows - With primary and secondary bows and a caustics effect within the bows.
Clouds - Condensation Trails - A stripe mask can be defined to simulate condensation trails in the sky.
Weather machine - A post-processing particle effect simulates rain (stripes), snowflakes or, e.g., floating particles in water.
Stars - The starfield generator has now been integrated into the atmospheric background model. You can also animate the star intensities, so you can produce a proper day-to-night transition where the stars become more intense as night falls.

Viewport Render Engine - The CyberMotion viewport render engine has been revised and optimized; transparencies are now also shown in the viewport windows.
Raytracing Render Engine - Up to 250% faster rendering of complex scenes containing high resolution models.

New menu entries under the "View" menu
Grid - Backface Culling - For closed shapes, only the lines of facets that face the viewer/camera are drawn. If the material property "Render all Facets" is activated, then of course all lines of the object are drawn (as in normal "Grid" view mode). Grid depiction with backface culling is slightly faster and provides a clearer overview of the scene, but when working on an object to add new points and facets it is preferable to see all lines and points in normal "Grid" view mode.
Gouraud Shading - Like flat shading, but with activated surface interpolation so that faceted objects appear smooth and rounded. You should use simple flat shading especially when rendering complex scenes or terrains, because it is simpler and therefore faster than Gouraud shading.
Bones - Transparent Skin - In the bones work mode, skin objects are always displayed transparently, so you can see where to place the bones within the skin.
If you switch on the menu entry "Bones - Transparent Skin", skins are also represented transparently in all other work modes.
Bones - Hide Skin - Once a skeleton has been created and all skin points allocated to it, you can speed up the creation of an animation by switching off the drawing of skins in the viewport windows.
Bones - Hide Bones - No bones will be drawn in the viewport windows, e.g., for rendering preview animations. This setting is independent of the setting in the render options dialog. Usually, final renderings are done without bones, but in the render options dialog you can choose to include bones in the final rendering, too.

Material
Texture Blur and Mip-Mapping - When rendering objects lying deeper in the background, a single pixel of the screen obviously cannot display all the texture detail covered by that screen pixel. Using only a single hit point from the object's surface would be like picking a color at random from the object, resulting in a noisy and flickering appearance. This is even more disturbing when animating the scene. You could of course reduce this effect by applying a higher oversampling rate (antialiasing), but that is very expensive in rendering time, and at great distances, e.g., at the horizon of planes, up to thousands of subpixels would have to be computed. Instead, two new techniques help you to reduce the noise at almost no extra cost in rendering time:
- Texture Blur - For procedural textures you can now use the Texture Blur function, which blurs the procedural texture pattern with increasing distance.
- Mip-Mapping - For bitmap textures you can now apply Mip-Mapping, a technique that generates additional bitmaps resized and filtered to lower resolutions. The original highly detailed bitmap is used for surfaces in the foreground. For more distant points one of the pre-filtered lower resolution bitmaps is applied. It is in fact a little like the opposite of the "Bilinear Filter" function, which reduces pixel steps in the bitmap when zooming into a bitmap texture. Combining both functions you get the best results and always smooth bitmap textures. (A small sketch of the mip-mapping idea follows at the end of this Material section.)
Bitmaps can now be tiled with a given number of repetitions in x- or y-direction.
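A rough Python sketch of the mip-mapping idea follows (assuming square power-of-two bitmaps and simple 2x2 box filtering; this is an illustration, not the renderer's implementation). Each level is a half-resolution, pre-filtered copy; at render time a level is picked from how many texels lie behind one screen pixel.

    # Sketch: building and selecting from a mip chain.
    import numpy as np

    def build_mip_chain(bitmap):
        levels = [bitmap.astype(float)]
        while levels[-1].shape[0] > 1:
            a = levels[-1]
            # average each 2x2 block into one texel of the next level
            levels.append((a[0::2, 0::2] + a[1::2, 0::2] +
                           a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
        return levels

    def pick_level(levels, texels_per_pixel):
        """The further away (more texels behind one pixel), the lower the level."""
        lod = max(0.0, np.log2(max(texels_per_pixel, 1.0)))
        return levels[min(int(round(lod)), len(levels) - 1)]

    chain = build_mip_chain(np.random.rand(256, 256))
    print([lvl.shape for lvl in chain][:4])   # (256, 256), (128, 128), (64, 64), (32, 32)
    print(pick_level(chain, 8.0).shape)       # a pre-filtered 32x32 level for distant surfaces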
Miscellaneous
Visual Project Library - There is an additional menu entry under "File - Load Projects" that opens a window with a visual project library. As with the other visual libraries, you can manage your projects and load, save or merge files with a simple double click on one of the thumbnail pictures. When saving a project, a thumbnail copy of the last rendered picture is always saved with it for later use in the visual library.
Additional functions for all visual libraries - You can change the root path of any visual library by using a new folder button located under each thumbnail window. The additional arrow button next to the folder icon opens a popup selection with the last visited folder paths, so you can change in an instant between all additional libraries you may have set up. This function sets only the root path of a library - secondary folders subordinated directly to the root folder are still accessible via the selection box under the thumbnail window.
Drag and Drop - You can now drag and drop CyberMotion project files directly from the Windows Explorer onto the CyberMotion program window to load a project into the program.
Light - Lensflares - Instead of a single light and halo color interpolation, a rainbow effect can be chosen for all effects.
Shortcuts - New shortcuts are provided for all important dialogs and for the render scene animation and render final animation functions.
Customize Dialog - Select "File - Customize" in the menu to call up a dialog in which you can define some general program settings, such as:
- the maximum number of undos and redos and the maximum memory for the undo files.
- the maximum number of last saved project files displayed under the "File" menu.
- the folder for temporary files: usually CyberMotion uses a temporary folder stipulated by the Windows system for saving temporary files that are created and deleted again in the background while working with CyberMotion. Now you can specify a particular folder for these files.
- the dialog for editing bitmap paths and the JPEG compression have been integrated into the customize dialog.
Object Selection Dialog - Double clicking on a plus folder icon will expand the whole selected hierarchy branch. Clicking with the Shift key held down will mark the range from the last selected object to the currently clicked object.
Object Selection in Viewports - More precise object selection by scanning the shape of objects instead of testing only their bounding volumes. This results in smaller selection lists when several objects lie behind each other. Selection lists are now depth sorted, from the nearest object down to the most distant object.
Animation Mode and Final Rendering - In former versions of CyberMotion, particle systems and the motion blur effect were visible only in animations. To get these effects in a still picture you had to set the starting frame of an animation equal to the destination frame, so that a one-picture animation was rendered. In version 10.0 all effects are included automatically, even if you render only a still picture via the "Render Final" option. The still picture will be exactly the same as the picture that would be rendered in an animation at that timeline position.

Previous versions:
Expansions to CyberMotion 3D-Designer version 9.0
Expansions to CyberMotion 3D-Designer version 8.0

.topic 240
Orthogonal View in the Viewport Window or Perspective View in the Camera Window
How do you work in 3D space? Basically, the program distinguishes between two different types of space. Firstly, there is a quasi "cubic space," in which all the objects are pictured orthogonally. Here, the objects shown in the work windows are composed of straight lines without any distortion due to perspective. While you are manipulating the objects you work in the orthogonal views and are able to view the scene from the front, back, top, bottom, right-hand or left-hand side. This, therefore, is similar to a technical drawing - without perspective. The viewport window can be freely moved in the three axis directions until you reach the preset 3D area limits, which lie at 2^24 = 16,777,216 units. However, the window moves only to the edges of this area, so objects are always visible. The order of the X, Y and Z axes is shown in the foregoing picture.
The other space is camera space. Here you can move the camera location freely. Furthermore, camera space provides perspective, so that the objects are subjected to optical distortion. It can be thought of as a pyramid-shaped space, in which the camera lens is located at the apex of the pyramid and the viewing angle is restricted by the sidewalls of the pyramid. Within the pyramid the objects are projected onto a plane that cuts the pyramid in front of the camera lens.
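A minimal Python sketch of this projection onto a plane (an illustration only; the program's actual camera model may differ in details such as field of view and aspect handling):

    # Sketch: perspective projection onto an image plane at distance d.
    def project(point, d=1.0):
        x, y, z = point               # camera space, +z pointing into the screen
        if z <= 0:
            return None               # behind the lens, outside the viewing pyramid
        return (d * x / z, d * y / z)

    print(project((2.0, 1.0, 4.0)))   # -> (0.5, 0.25)
    print(project((2.0, 1.0, 8.0)))   # -> (0.25, 0.125)  same point twice as far: half the size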
Axes Systems - Left-Handed World Space
The basis of a 3D space is a Cartesian coordinate system - three straight lines that intersect in a single point and stand perpendicular to each other. These lines form the x-axis, the y-axis and the z-axis. Each axis consists of a positive and a negative section starting from the origin. The picture above shows the axes definition for the CyberMotion 3D space: left (-x) to right (+x) and top (+y) to bottom (-y). The direction of the z-axis determines whether the system is called right-handed or left-handed. When the positive end of the z-axis points out of the screen, the coordinate system forms a right-handed system - if it points into the screen, as in CyberMotion, it is called a left-handed system.

Object Space
Each object is equipped with an additional set of axes forming the object space. These axes follow each movement of the object in world space, for instance moving and rotating with the object. You can use this axes system to scale an object along its object axes or rotate it about the axes in object space. Take a look at the cylinder in the picture above: to elongate it you simply need to scale it along its y object axis. Object axes are also used to define the pivot points for hierarchical, multisectional joints, e.g. in robot constructions. Finally, all animation data is recorded in object space with reference to the object axes system. If you, for instance, move or rotate the object axes system of an already animated object, you also change the behaviour of the object in the animation.

Texture Space
The texture space is used to align material textures or bitmaps with an object. The same applies to the texture space as to the object space: each object has its own texture space, and the texture axes follow all object transformations, keeping the textures always in place. There are texture axes both for procedural textures and for each bitmap projected onto an object.

Point - We can describe a point in 3D space exactly by defining the three coordinates for the x-, y- and z-axis. The individual coordinates represent the perpendicular projection onto the corresponding x-, y- or z-axis.
Facet - Two points connected together result in a line. Adding a third point not lying on this line results in the surface description of a triangle. Almost all objects in CyberMotion are generated from these triangular surfaces (facets), because almost any object's surface can be approximated with them. This example shows a torus object built from many little triangular facets.
NURBS Patches - NURBS stands for "Non-Uniform Rational B-Spline", a special type of deformable 3D patch. A surface is created based on a low-resolution rectangular grid. The individual points of this grid represent control points that form a surface of much higher resolution. By manipulating these control points you can very easily model smooth and organic shapes. The resolution parameter defines the initial point resolution of the surface. This can be changed at any time later in the working process.
Analytically Defined Objects - There are some more basic shapes apart from the triangular facets. You can, for example, create a sphere as a basic object defined only by a center and a radius, instead of approximating the sphere with hundreds or thousands of facets. When you later render this object in raytracing mode, the sphere can be calculated in a very short time, since just the basic object has to be tested for intersection by the viewing ray. On the other hand, you cannot manipulate the shape of analytically defined objects, for example by deforming them, since that would destroy the mathematical description of these basic shapes. Two spheres looking just the same - on the left an analytical sphere, defined only by a center and a radius, and on the right a sphere constructed from 3000 facets to ensure a smooth curvature of the object.
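A compact Python sketch of why an analytical sphere renders quickly in raytracing mode: the intersection with a viewing ray is a single quadratic equation in the center and radius, instead of thousands of ray/facet tests (an illustration only, not the program's ray tracer):

    # Sketch: ray/sphere intersection for an analytically defined sphere.
    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return the distance along the (normalized) ray to the first hit, or None."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        dx, dy, dz = direction
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4.0 * c                      # the quadratic's a == 1 for a unit direction
        if disc < 0.0:
            return None                             # ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    # A ray from the origin straight along +z hits a radius-1 sphere centered at z=5:
    print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # -> 4.0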
The Surface Normal
The surface normal is a vector standing perpendicular to the object's surface and is of importance in various respects. For instance, it is used to determine the surface brightness with respect to the light incidence angle. Another use for normals is to determine whether a facet has to be drawn at all. If, for example, you construct a sphere, then essentially only the front hemisphere needs to be drawn - the back half cannot be seen. On construction of an object the facets are created so that the normals always face outwards. When rendering the picture, only those facets are drawn whose facet normal is directed towards the camera's viewpoint. See also: Object properties - render all facets
Finally, normals can be distorted so that they no longer stand perpendicular on the surface. This results in distorted light intensity calculations, but this is intentional. You can use this effect, for instance, to simulate a smooth surface on a faceted object. The picture above shows two identical sphere objects - only for the right sphere was surface normal interpolation switched on. The surface normals of the facet and the adjacent facets are then included in the calculation of the illumination of the facets, resulting in the impression of a curved surface. This makes the object look smooth and rounded. The advantage is obvious: you can build smooth-looking objects from a smaller number of facets and save memory and time when rendering the scene.
Distorting the surface normal enables even more possibilities: it can be used to simulate a raised structure on the object's surface. If, for example, you apply a stripe texture and switch on the distortion of the surface normal, the normals are distorted towards the edges of the stripe pattern. In this way, the calculation of the light intensity creates the impression that the surface falls away at the edges of the stripes. This example shows a tiled box. Although the box is totally smooth, it appears to be constructed from several tile objects. But it is only a single object with a block texture and normal distortion assigned to the block pattern. More examples of normal distortion are water textures or the creation of irregularities in landscape textures.
See also: Normal Distortion, Interpolation, Facet Visibility, View - Draw Normals, Invert Normals
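A small Python sketch of two of the normal's uses mentioned above: deciding whether a facet faces the camera, and computing its diffuse brightness from the light incidence angle (an illustration only, not the renderer's code; perturbing the normal before the brightness step is what fakes raised edges on a flat surface):

    # Sketch: facet visibility test and Lambert brightness from the surface normal.
    import numpy as np

    def faces_camera(normal, to_camera):
        return np.dot(normal, to_camera) > 0.0          # back-facing facets are skipped

    def diffuse_brightness(normal, to_light):
        n = normal / np.linalg.norm(normal)
        l = to_light / np.linalg.norm(to_light)
        return max(0.0, float(np.dot(n, l)))            # Lambert: cosine of the incidence angle

    n = np.array([0.0, 0.0, -1.0])                      # facet normal pointing at the viewer
    print(faces_camera(n, np.array([0.0, 0.0, -1.0])))              # True
    print(diffuse_brightness(n, np.array([0.0, 1.0, -1.0])))        # ~0.707, light at 45 degrees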
.topic 140
Workspace - How the structure of the work space is organized
Work Colors - Specify the viewport's background and wireframe colors
Viewport Management - Select views and depiction types for the viewport windows
Menu Bar and Button Bars - A complete list of the menu and button bar functions
Program Settings - Some general program settings can be edited here, like the management of temporary (undo) files or the definition of bitmap search paths
Undo and Redo - How to undo work steps or repeat them again
Visual Libraries - How to manage visual libraries
Color Range Editor - How to create color palettes
Dialog Preview Options - General preview options for the preview windows contained in the main dialogs for light, material and backgrounds

.topic 3
After starting the program, the main window appears with the menu bar and the two button-strips at the top, the tool window at the left and the animation button-strip at the bottom of the window. The main area in the center is occupied by the four depiction windows (viewports).

Switch between Modeling and Animation Mode
Work on a project is divided into Modeling Mode and Animation Mode. There are two prominent buttons at the top left corner of the CyberMotion window to switch between the two modes. In principle there is no great difference between Modeling and Animation Mode - you have the same tool menus in both work modes, except that some of the functions of Animation Mode are no longer accessible in Modeling Mode and vice versa.
In Modeling Mode all changes made to an object - e.g. the deforming of an object by working on individual points - are permanent changes of the object's shape, while in Animation Mode every action is merely a transformation of the model data and can be undone at any time by reversing the working steps or deleting the keys that were created automatically when manipulating an object in Animation Mode.
Another example: Scaling an object and its children in Modeling Mode will result in a permanent change of size of the model throughout the entire animation. If the children are deformed by this scaling, this deformation is a permanent change of the shape of the objects. If you scale an object and its children in Animation Mode, it is only a temporary change of size. Moreover, the children will not be scaled at all - it is just their coordinate systems that are temporarily deformed by the scaling of their parents, without influencing the children's object and animation data (Hierarchy-Independent Animation).
Apart from these main differences, the working process in Modeling and Animation Mode is very similar. In both work modes you can move, scale or rotate objects, position and align the camera and start picture rendering at any time. Detailed descriptions of the differences and restrictions, depending on whether you are working in Modeling or Animation Mode, are provided in the corresponding chapters of the tool menus.

Button-Strip 1
With the aid of the first button-strip - which is located directly below the menu bar - you have quick access to the most important dialogs, e.g. load and save projects, object selection, light and picture parameters, object editors, materials, etc. These dialogs can also be found as entries in the menu bar.

Button-Strip 2
Directly below button-strip 1 is a second button-strip, on the left of which are nine buttons that allow you to switch quickly between the different work modes and the camera mode. In addition to these nine buttons there are 4 buttons for preview and final rendering, the Undo/Redo buttons, the zoom scaling, the select box for the object groups and finally the automatic snap functions for easy positioning of objects.

Tool Window
This window represents a toolbox, which includes all the necessary components for editing objects. The content of this window changes depending on the work mode.
In "Rotate Object" mode, for example, there are different parameters for rotating objects, while in camera-mode all relevant data for camera positioning and alignment is accessible in the tool window. There is always the possibility to grab objects and move, scale or rotate them directly with the mouse in the viewports - while all corresponding parameters are updated and shown in the tool-window simultaneously - or to go the other way round and change the parameters in the tool window, while the scene is updated in the viewport windows. Via the nine work-mode buttons in the second button-strip you can change to the work-modes Camera, Move Objects and Textures, Scale Objects and Textures, Rotate Objects and Textures, Move window-detail, Deform objects, Edit objects and Edit Skin and Bones. Animation Button-Strip The Animation button-strip at the bottom of the screen is only applicable, when you have switched to Animation Mode. The animation button-strip contains a slider and keyframe buttons, with which you can comfortably move forward and back between the individual frames of an animation. There are additional buttons to call up the animation editor, to play preview animations directly in one of the viewport windows, to record keyframes manually, and finally another set of buttons at the right end of the button-strip to define the individual tracks that should be considered for the automatic creation of keyframes. Depiction Windows - Viewports The viewport window shows you a detail of the 3D-space. You can move the window detail just by pressing left and right mouse buttons simultaneously and moving the mouse over the viewport window. Using simple mouse-actions, you can click on objects directly in the viewport windows to select, position, scale or rotate them. CyberMotion manages up to ten freely configurable viewport windows via the "Windows" menu entries. You can choose - from the "View" menu-strip - any view for each open viewport (front, top, right, back, bottom, left or camera-perspective) and depiction type (e.g. lines or flatshading). Parameter-Inputs Almost all parameters can be modified directly with the mouse or by the keyboard. When you have input a parameter over the keyboard, you confirm this input with a mouse-click or the return key on the keyboard. You need not be concerned about incorrect inputs - those that are outside limits are recognized and the input automatically corrected. .topic 40 The Work Colors dialog is reached by selecting the entry of that name under the "Options" header in the menu bar. In this dialog you can specify the colors that are used in the wire-frame, hidden line or flat shading modes during editing. To change a color, select the appropriate button, whereupon a color selection dialog appears, from which you can choose a new color. Object Color: When this option is checked the faces of the objects in hidden line mode are drawn with the object's selected material color. Switch this option off and a single color is used to depict all objects. All Facets: This color is used for the facets in hidden line mode when the box has not been checked. Selected Facets: Selected faces are drawn in this color in hidden line and flat shading mode. Lines: This determines the color used for wire-frame lines of all objects except those that have been selected for treatment. Lines - Sel. Object: The wire-frame lines of all of the objects that have been selected for editing are drawn in this color. 
This distinguishes the selected objects from the remaining objects that are merely switched on - making them easier to work with.
Reference Object: The lines of reference objects, which serve as references for scaling, rotation etc. of other objects, are drawn in the color indicated here.
Sel. Reference Object: Reference objects selected for treatment are drawn in this color.
Background: The background color of the view window can be modified by this function.
The original colors used in the dialog can be restored by operating the button.

.topic 11
The viewport window shows an area of 3D space through which you can freely move by using the "Move window" menu or by clicking into a viewport and moving the mouse while holding down the left and right mouse buttons simultaneously. Using simple mouse actions, you can click on objects directly in the viewport windows to select them and then position, scale or rotate them. CyberMotion manages up to ten freely configurable viewport windows through the "Windows" menu bar. You can choose - via the "View" menu-strip - any view for each open viewport (Top, Bottom, Front, Back, Left, Right, or Camera perspective) and any depiction type (e.g. lines or flat shading).

Managing the viewport windows
The "Windows" menu bar contains functions to open new viewport windows and to arrange or close existing viewports:

WINDOWS
--------------------------------
Cascade - All open windows are made the same size and then arranged in a cascaded formation in the main window.
Tile - All open windows are scaled and sorted to fit side by side and under each other in the main window.
Arrange symbols - The symbols of minimized windows are sorted.
--------------------------------
Close Window - This function closes the currently active viewport window.
Open New Window - You can open up to ten viewport windows and provide each with a view and depiction type.
--------------------------------
Store Order on Finishing - The current window settings are saved when the program is closed if this entry has been activated. When you next start the program you will find that the work area is the same as when you left it. If this entry is not active, the standard four-window arrangement - which consists of front, top and right views and a camera window - loads the next time the program is started.
--------------------------------
View 0...: - At the end of the "Windows" menu-strip is a list of all open windows. You can change between active windows via this list or, for example, restore a minimized window and reactivate it.
--------------------------------

Selecting a View for a Viewport
You must click on the corresponding window to activate it before choosing a viewing mode for a viewport. Now go to the "View" menu-strip. A preceding tick identifies the preset view for the active window. You can choose another view for the window. The entries "Front," "Top," "Right," "Back," "Bottom" and "Left" correspond to work views in orthogonal drafting from the relevant direction. Working directly on objects is possible only in windows with these orthogonal views. If you choose the "Camera" entry, the window shows the perspective view through the camera - as used later when rendering the picture.

Depiction Type in the Viewport Window
Grid, Hidden, Flat Shading
Objects can be depicted in the viewport windows in several different modes - Wireframe, Hidden line or Flat shading mode. You can set this individually for each active viewport.
For example, flat shading could be used for the camera perspective and wireframe mode for all other views. You must activate the relevant window (e.g. by mouse click); then you can change to a different depiction type via the entries "Grid," "Hidden" or "Flat Shading" in the "View" menu-strip:
Grid - Only the lines connecting surface points are drawn (wireframe depiction).
Grid - Backface Culling - Only the lines of polygons facing the camera are drawn. (Similar to Hidden Line, but the polygons are not filled.) If the object property "Render all Facets" is activated, the object is interpreted as a surface object instead of a solid; therefore all lines are drawn, because both sides of a polygon can be seen by the camera.
Hidden Line - Almost all objects in CyberMotion are generated from triangular polygons (facets). In Hidden Line mode facets are drawn in a single object color - no illumination is applied - with correct depth information using a simple z-buffer algorithm.
Flat Shading - Like Hidden Line, but with illumination of the facets.
Gouraud Shading - With this algorithm objects are drawn with smoothed surfaces. Gouraud Shading provides the best quality but needs longer to render. You should activate this algorithm only for the camera viewport and for scenes that are not too complex. If rendering gets too slow, change back to Flat Shading or Grid depiction with backface culling.

No Lines, Lines, All Lines
As already mentioned, in CyberMotion almost all objects are built from triangular facets. However, it is often not necessary to draw all three lines to get an exact image of the object. With a quadrilateral surface, for example, which is comprised of two triangles, you need not draw the middle line. This keeps complex objects more manageable and the drawing of the scene faster. The object editors included in CyberMotion support this function by retaining the necessary information for each object to obtain the best possible depiction with the fewest superfluous lines.
No lines - None of the lines connecting the facet points are drawn. For example, you can decide to render the polygons in Flat or Gouraud Shading with or without drawing the lines of the polygons.
Lines - Lines are drawn in accordance with the facet information.
All lines - All lines connecting the facet points are drawn.

Points
In addition, the point option can be switched on to clearly highlight the individual point connections of the polygons. Furthermore, unconnected points (which can be generated in the "Edit Objects" work mode while creating new polygons) are only visible if the "Points" option is switched on in the menu-strip.

Normals
A normal is a vector standing perpendicular to each facet of an object. The normal is important on the one hand for the visibility calculation (does the facet of a solid object face the camera or not) and on the other hand for the light incidence and interpolation calculations of the facets. All the normals of selected objects are shown in the drawings when you switch on the View/Normals entry.

Skin and Bones - Depiction in the Viewport
The following three options are applicable for the depiction of skeletons and skins when working on character animations:
Bones - Transparent Skin - In the "Edit Skin and Bones" work mode the polygons of skin objects are always drawn transparent, so that the bones within the skin can be easily recognized, selected and aligned.
If you switch on the "Bones - Transparent Skin"-entry for a viewport then the polygons of skin objects are always drawn transparent, independant of the current work mode. Bones - Hide Skin - Once the skeleton is created and all skin points allocated to the individual bones, then you can accelerate the setting up of an animation by hiding the skin for the depiction in the viewport windows. Only the bones of the skeleton are drawn and can be easily selected and aligned in the animation. Bones - Hide Bones - Using this option, the bones of a skeleton are hidden in the viewport windows. You can use this option to render fast preview animations in "Render Scene Animation"-mode without disturbing bones peeking in and out of the skins. Viewport Bitmap If you choose this function, then a bitmap is copied into the background of the viewport. The bitmap can serve as a construction plan in the Edit Object work mode for adding new points or facets. Select the "View - File" entry in the menu bar to call up the file select box where you can search for your image. Copy To All Viewports Choosing this entry will copy the settings of the currently activated viewport to all other viewport windows. .topic 4 In the Workspace-overview you have seen already how the CyberMotion window is arranged in menu- and button-strips, tool window and viewport windows. This chapter gives a detailed description of the entries of the menu bar and their corresponding icons in the button strips. In general for every important menu entry a corresponding icon is found in the button strips. Just move with the mouse over an icon and an explaining tooltip text will show up automatically. The buttons of the animation button-strip at the bottom of the screen are described separately in the chapters regarding the setting up of an animation. Button-Strip 1 With the aid of the first button-strip which is located directly below the menu-bar - you have fast access to the most important dialogs, e.g. load and save projects, light and render options, object-selection, materials, object editors, etc. Button-Strip 2 Directly below-button strip 1 is a second button-strip, on the left of which are nine buttons that allow you to switch quickly between the different work-modes and the camera-mode. In addition to these nine buttons are 4 buttons for preview and final rendering of pictures and animations, the Undo/Redo-buttons, the zoom scaling, the select box for the object-group and at the right end the snap functions that will help to align objects to the background grid or to the lines and points of other objects. 
The entries in the menu bars and button bars:
File
--------------------------------
Project Library
Load
Merge
Save
Save As
Save - Selected Objects
--------------------------------
New
--------------------------------
Show Last Rendered Picture/Animation
--------------------------------
Customize
--------------------------------
Quit
--------------------------------
1-10 Last Project Paths
--------------------------------
Edit
--------------------------------
UNDO/REDO
--------------------------------
Camera
--------------------------------
Move Object / Texture
Scale Object / Texture
Rotate Object / Texture
Move Window
Edit Object
Edit Skin and Bones
Deform Object
--------------------------------
Show Deformation
--------------------------------
Show as Box
--------------------------------
Render
--------------------------------
Render Scene
Render Final
Render Scene Animation
Render Final Animation
--------------------------------
Options
--------------------------------
Render Options
Light
Background
--------------------------------
Work Colors
--------------------------------
Animation Editor
--------------------------------
Particle System
--------------------------------
Magic Pictures
--------------------------------
Objects
--------------------------------
Select Objects
--------------------------------
Extrude
Sweep
NURBS Primitives
Analytical Primitives
Landscapes
Plane
Text
Function
Group Object
--------------------------------
Material/Color
--------------------------------
Info
--------------------------------
Depiction
--------------------------------
Camera
Front
Top
Right
Back
Bottom
Left
--------------------------------
Grid
Grid - Backface Culling
Hidden
Flat Shading
Gouraud Shading
--------------------------------
No Lines
Lines
All Lines
--------------------------------
Points
--------------------------------
Normals
--------------------------------
Bones - Transparent Skin
Bones - Hide Skin
Bones - Hide Bones
--------------------------------
Copy To All Viewports
--------------------------------
Window
--------------------------------
Cascade
Tile
Arrange Symbols
--------------------------------
Close Window
Open New Window
--------------------------------
Store Order on Finishing
--------------------------------
List of Open Viewports
--------------------------------
Help
--------------------------------
Contents and Index
--------------------------------
How to Register
--------------------------------
www.3d-designer.com
mailto:support@3d-designer.com
--------------------------------
About CyberMotion 3D-Designer
--------------------------------
.topic 660 Select "File - Customize" in the menu to call up a dialog in which you can define some general program settings: General Program Settings On the "General" page you can edit: Number of last saved project files - The files you have worked on recently are listed under the "File" menu. Here you can specify how many files should show up in the list (1..10). Maximum Number of Undos and Undo Memory - The maximum number of undos and redos and the maximum memory for the undo files. Each working step in CyberMotion is recorded and saved to temporary files. Depending on the complexity of the project and the kind of working step that has to be recorded, these files can become very large. Therefore you can limit the maximum number of files and the maximum memory used for them with these parameters. If the memory limit is exceeded, then the number of recorded undos will automatically be reduced.
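The behaviour described above - a bounded number of recorded steps plus a memory budget, with the oldest steps discarded first when a limit is exceeded - can be pictured with a small sketch. The following Python fragment is purely illustrative; the class, parameter names and sizes are invented and this is not CyberMotion's actual implementation:

# Illustrative sketch only - not CyberMotion code. Undo steps are recorded up to
# a maximum count and a maximum total memory; the oldest steps are dropped first.
from collections import deque

class UndoHistory:
    def __init__(self, max_steps=50, max_bytes=100 * 1024 * 1024):
        self.max_steps = max_steps          # "Maximum Number of Undos"
        self.max_bytes = max_bytes          # "Undo Memory" limit
        self.steps = deque()                # (description, size_in_bytes)

    def record(self, description, size_in_bytes):
        self.steps.append((description, size_in_bytes))
        # Enforce the step-count limit first, then the memory limit.
        while len(self.steps) > self.max_steps:
            self.steps.popleft()
        while sum(size for _, size in self.steps) > self.max_bytes:
            self.steps.popleft()

    def undoable_steps(self):
        return [description for description, _ in self.steps]

history = UndoHistory(max_steps=3, max_bytes=10_000_000)
history.record("Move Object", 2_000_000)
history.record("Rotate Object", 3_000_000)
history.record("Scale Object", 6_000_000)    # total exceeds 10 MB -> oldest step dropped
print(history.undoable_steps())              # ['Rotate Object', 'Scale Object']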
Path for Temporary Files - Usually CyberMotion uses a temporary folder stipulated by the Windows system for saving temporary files that are created and deleted again in the background while working with CyberMotion. But you can also specify a particular folder for saving these files. Paths for Bitmap Textures On the "Paths" page you can input up to 4 different path names. Simply press any of the 4 buttons and select the desired path in the Select Path dialog that then appears. The program will later look under these paths for the bitmap files, which can be used to project textures onto object surfaces. You can also place your bitmaps directly in the folder that contains your project file. Only if the program cannot find the picture there does it search under the paths defined in the Bitmap Paths dialog. JPG On this page you are asked for the JPEG compression rate used to compress pictures when saving them in JPEG format. Small values will yield higher picture quality but also a larger file size for the picture. High values will produce smaller files, but might introduce ugly artifacts in the rendered picture due to the lossy JPEG compression algorithm. .topic 63 Undo and Redo functions are obtained via the "Undo / Redo" entries in the "Edit" menu bar or directly from the -buttons in the button-strip. You can also use the shortcuts "Ctrl" + "Z" (Undo) and "Ctrl" + "Y" (Redo). With the Undo function you can return to previously concluded work steps; with the Redo function you can repeat them again. This applies to almost all changes relevant to the project. If you, for example, change some parameters in the light dialog, you can undo these changes right after leaving the dialog by selecting the Undo button. If you select the arrow buttons next to the Undo / Redo buttons, then a listbox opens with a list of the last work steps. By selecting an entry you can undo or redo several work steps in one go. The maximum number of operations recorded for undos and redos and the maximum amount of memory allocated for this task can be specified in the Customize dialog. .topic 102 The illustration shows a detail of the visual library in the material dialog. Many of the main dialogs in CyberMotion contain visual libraries to provide fast and convenient access to already existing projects, materials, backgrounds, landscapes or color range files. The main item of a visual library is the library window, in which all available entries are represented by a thumbnail picture. At installation time, all libraries are created in the root folder of the CyberMotion installation. But you can also choose other folders as the root folder for your libraries. Use the root folder button to open a file select box and select a new path where you want to save your collection. Operate the arrow button next to the folder icon to open a listbox with the last visited collections. This way you can easily jump to and fro between different collections. Below the root folder icon is another "Library" selector box. The libraries' root folders can contain secondary folders to provide several categories within the library. For instance, the visual material library contains secondary folders for wood, rock and landscape textures. With the help of the "Library" selector box you can change to a sub-category, if provided. Load a Library File To take over an entry from the library, simply double click on the corresponding thumbnail picture in the library window.
Alternatively you can mark an entry with a simple mouse click and then press the button. Save a File to the Library Operate the button to add the data you are currently working on to the library (for instance a material you are editing in the material dialog). After operating the button a file selector box appears, in which you can decide on the name for the file and the folder the file should be saved to. Every kind of data has its own root folder that is automatically chosen when calling up the file selector box. However, you can add secondary folders within that root folder to create sub-categories. Example - material dialog: you have modified a material that you now want to save to the library. You operate the button and the file select box appears with the path automatically set to the root folder for the material library: "c:\programs\cybermotion 3d-designer\material". If you intend the new material to be put into the root folder, just enter an appropriate name for the material and save it. However, if you want to save the material to a special category, for instance to put a copper material into a secondary "metal" folder, you can add a new "metal" sub-folder within the materials root folder (see Windows documentation) and save the material to this new folder. Example: "c:\programs\cybermotion 3d-designer\material\metal\copper.mat". When you now leave the material dialog and enter it again, the "metal" folder will have been added as a new category to the "Library" selector box. Delete a Library File First select the file you want to delete by clicking on its thumbnail picture. Then simply operate the button. .topic 99 Many functions in CyberMotion depend on the definition of a color range, for instance a sky background with a color range graduating from zenith to horizon, or a procedural color range texture. To edit a color range simply click on the color range bar in the corresponding dialog and the color range editor shown above appears. With it you can easily add new colors or delete existing ones. The visual library on the right part of the editor provides pre-defined color ranges and the possibility to save your own creations to the library. Edit a Color Range - Add Color Entry A color range always contains one starting color and optional additional colors used to calculate the gradients in the color range. To add a new color, simply click on a free position in the color range. Under the color range bar is another small color button showing the current color at this position. Click on the color button to call up the Windows color editor; there you can choose a matching color for the selected position in the color range. When you leave the color editor the chosen color will be inserted in the color range and a new gradient is calculated (a small sketch of how such a gradient can be computed follows below). Edit Color Entry Underneath the color range bar a horizontal line is displayed with small color boxes representing each color entry in the range. You can click on these small boxes to select an entry. Then simply click again on the color button beneath the line to call up the Windows color editor once more, where you can choose a new color for the selected entry. Move Color Entry Click on a color entry and, holding the mouse button pressed, move it to the right or left. Delete Color Entry Select a color entry and simply press the button to remove it. Restore Color Range Operating the button sets the color range back to the state it had when the color range editor was called up.
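As background to the gradient calculation mentioned above: the color at any position of the range can be obtained by blending the two nearest color entries. The following Python sketch is purely illustrative; the function name, entry positions and RGB values are invented and are not taken from CyberMotion:

# Illustrative only: linear interpolation between color range entries.
# Each entry is (position from 0.0 to 1.0, (R, G, B)).
def color_at(entries, t):
    entries = sorted(entries)                        # sort entries by position
    if t <= entries[0][0]:
        return entries[0][1]
    if t >= entries[-1][0]:
        return entries[-1][1]
    for (p0, c0), (p1, c1) in zip(entries, entries[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)                 # blend factor between the two entries
            return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))

# Example: a sky-like range from deep blue at the zenith to white at the horizon.
sky = [(0.0, (20, 40, 120)), (1.0, (255, 255, 255))]
print(color_at(sky, 0.5))    # -> (138, 148, 188), the halfway blend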
Using the Visual Color Range Library You can load pre-defined color ranges from the library simply by double-clicking on a thumbnail picture in the library window, or save your own creations to the library by operating the button. For a general introduction to the library functions see also: Visual Libraries. .topic 106 Each of the three main dialogs for material, background and light settings contains a large preview window that provides various rendering qualities, two different resolutions and an automatic or manual update, respectively. These options are the same for all dialogs. In addition to these basic options each preview window provides a selector box right beneath the window that offers various selections of views and object groups for the preview rendering, arranged according to the specific demands of the individual dialog. Resolution - Click on the magnifying glass to change between the two possible preview picture resolutions (240 * 180 or 160 * 120 pixels). Quality The spheres underneath the preview window represent the four possible rendering grades for the preview calculation: 1. Simple scanline algorithm. 2. Raytracing, without shadows and antialiasing. 3. Raytracing with shadows. 4. Raytracing with shadows and antialiasing. Auto - If this button is activated, then every parameter change will cause an immediate redraw of the scene in the preview window. If the button is switched off you have to start a redraw manually by operating the button. This is advisable when working on very complex scenes where you want to adjust several parameters in a row before starting a new preview calculation. Stop Preview Calculation (ESC) If a preview calculation of a very complex scene lasts too long you can interrupt the preview at any time by pressing the ESC key on your keyboard. Of course complex scenes, for instance a landscape scene consisting of millions of points and facets, ought to be rendered in a low-grade rendering mode, preferably in raytracing mode without shadows and antialiasing. .topic 190 Project Management
Project Library - How to load, save or merge a project file into an existing project
Import and Export - How to import or export foreign file formats
.topic 6 Project Library Selecting the "File - Project Library" entry in the menu bar calls up a library window with a thumbnail list of all projects located in the current root folder of the project library. Using the library functions you can easily load, save or delete existing projects. Use the functions described in Visual Libraries to add new collection folders or to change between your existing collections. Load, Save and Merge via the File Menu The five file entries "Load", "Merge", "Save", "Save As" and "Save - Selected Group" call up the file selection dialog box through which projects can be loaded and saved in various forms. To load or save 3rd-party file formats you always have to use these functions - you can't save or load foreign formats with the project browser. Load You can load the project file of your choice via the file selection box. CyberMotion files are saved with the extension ".CMO". For fast access to files you have recently worked on, the last opened files are also listed under the "File" menu. Just click on the corresponding entry to load one of these files. Merge Objects are loaded and merged into a scene without deleting current objects.
However, there is the following exception: The camera, ambient light and the background are managed like all other objects in the program - you can select and move them, for example. However, there can only be one camera, one ambient light and one background. Therefore, when you select a file to merge, a dialog appears in which you can choose to retain the current camera, ambient light or background object, or to replace them with the objects included in the file. Additionally you can choose to exclude further sun lights contained in the file. Save / Save As Once you have started a new project, you can save it under a suitable name by calling up the file selection box via the "File - Save" entry in the menu bar. Now all object data, and also the settings for camera, light, background and animation data, will be stored in a file with the ".CMO" extension, like "project1.cmo", for instance. If you choose this function for a project that has already been named, it will be saved directly and without showing the file selection box. If you want to save your project under an alternative name - to make a backup copy, for instance - you should use the "File - Save As..." menu function. This will call up the file selection box once again, allowing you to save a copy of the project under a different name. Save - Selected Group If you choose this entry, only those objects are saved that are currently selected in the Select Objects dialog and which, therefore, are drawn in the viewport windows. Also included are the camera, background and the light objects, if they are switched on. You should use this function when saving individual objects to library collections. Switch off all other objects, lights, the background and camera and save only the particular object designated for the library with the "Save - Selected Group" function. Import and Export of Foreign Formats In addition to CyberMotion's own object format (suffix ".CMO"), other foreign formats can be loaded (3DS, DXF, RAW) and saved (DirectX, 3DS, DXF, RAW, VRML) with the aforementioned menu items. .topic 37 CyberMotion uses its own ".CMO" object format, in which, in addition to actual geometrical data such as points and facets, a great deal of further information on object structures and materials, as well as the light, background, camera and animation settings, is also saved. In CyberMotion you also have the facility to save and load other foreign formats. You need only select the corresponding extension for the relevant data format in the file select box. If, e.g., you want to save a project in ".3ds" format, use the normal "File - Save" entry in the menu bar, but before saving the project change the project extension from "project.cmo" to "project.3ds".
3DS
DXF
RAW
VRML 2.0 Export
DirectX Export
3DS-Format CyberMotion also offers export and import of the widely used 3DS format. This allows you to download free objects for use in CyberMotion from many websites on the Internet, but also from commercial sources. CyberMotion reads all relevant object and material data from 3DS files, but bitmap textures and animation data are ignored. AutoCad DXF-Format The DXF format contains a very complex data configuration with a multiplicity of possible element definitions. For CAD programs, this is normally organized in different layers. CyberMotion imports most elements that have a three-dimensional extent and sorts the individual items into different types.
Most of the elements in DXF are analytical definitions of object configuration, and are converted by CyberMotion into the program's own internal object definition - based on triangular facets. Likewise, a variety of calculations are optionally executed - optimizing the objects with regard to memory demands (superfluous points are deleted), orientation of the normals and object size - which can take some time. Because the DXF format does not support textures - only color information - the imported objects are initialized with default material values. CyberMotion imports only ASCII DXF files valid up to version AC1009! DXF Elements:
extruded lines
(extruded) circles
extruded arcs
(extruded) bands
(extruded) solids
3D surfaces
(extruded) (closed) (wide) 2D polylines
approximated polygon nets
approximated polyface nets
The DXF Import Dialog: In the illustration you see the dialog that appears when you want to load a DXF file. Arrange Objects According to: In the top-most box of the dialog, the options by which the DXF elements are assigned to individual objects are: Layer: If elements are on different layers, then all elements that are on the same layer are assigned to one CyberMotion object. The layer name is adopted as the object name. Color: All DXF elements of the same color are assigned to the same object. Element = Object: In this variation, a single object is generated for each element found in the DXF file (max. 5000). This option is included for special DXF elements. If, for example, the DXF file consists of 1000 3DFACEs (3- or 4-cornered facets), then you must also produce 1000 equivalent objects on converting to CyberMotion objects, because the DXF format manages each single 3DFACE as an individual element. One Object: All elements in the DXF file are assigned to a single CyberMotion object. Lines -> Band In DXF there is the lines element, which has no 3-dimensional extent and, therefore, is not imported by CyberMotion. If, however, the line is extruded, a 3-dimensional band is formed from the 2-dimensional lines. If you switch on the option "Line > Band", then normal lines are interpreted as extruded bands. You could, therefore, import completely normal line drawings and generate 3-dimensional band objects from them. Additionally, the depth of the band can be preset via the corresponding button. Remove Unnecessary Points Many programs export pure DXF 3D object 3DFACE data. With this, each three- or four-sided facet is a self-contained element defined by 3 or 4 points respectively. Most objects consist of facets with adjacent borders, where the same point is defined several times for the separate facets. If you switch on the option "Remove Superfluous Points", then all points that occur several times are used only once. Objects integrated in this manner typically require only about 1/6 of the previous number of points for the definition of the facet corner points, because in a typical closed mesh each point is shared by roughly six facets. Or put another way: you could import up to six times more object points. With the tolerance parameter you can specify the maximum distance below which two points are regarded as the same point. Adjust Normals The object normal is a vector standing perpendicular to each facet of an object. The normal is important on the one hand for the visibility calculation and on the other hand for the light incidence and interpolation calculations of the facets. These normals are not contained in the DXF format, so CyberMotion must calculate the normal for each new facet.
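As a side note to the normal calculation just mentioned: a facet normal can be derived directly from the three corner points of a triangle as the normalized cross product of two edge vectors, and the same vector can then drive the kind of visibility (backface) test described in the depiction modes. The following Python sketch is purely illustrative and is not CyberMotion code:

# Illustrative sketch only: computing a facet normal from a triangle's corners
# and using it for a simple backface test.
import math

def facet_normal(p0, p1, p2):
    # Two edge vectors of the triangle
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # The cross product is perpendicular to the facet
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / length, ny / length, nz / length)

def faces_camera(normal, facet_center, camera_position):
    # The facet faces the camera if its normal points towards the camera.
    view = tuple(camera_position[i] - facet_center[i] for i in range(3))
    return sum(normal[i] * view[i] for i in range(3)) > 0.0

n = facet_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)                                                # (0.0, 0.0, 1.0)
print(faces_camera(n, (0.3, 0.3, 0.0), (0, 0, 10)))     # True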
To optimize the facet visibility examination it is important that the normals of all facets point uniformly to the outside of the object. If the option "Adjust Normals" is switched on, then CyberMotion attempts to calculate a unified alignment for the normals of the facets of the objects. It can happen, however, that the normals then all point to the interior instead of to the outside. In this event you must apply the "Invert Normals" function in the "Edit Object" tool window to the object to reverse the normal alignment for all or for individually selected facets. See also: The surface normal Adjust Size to Window If this button is selected, the objects from the file are automatically scaled and adapted to the size of the viewing window. Circle-/Arc Segments With DXF elements such as circles, circular arcs and polylines, the circle arcs included in the DXF file are analytically defined with regard to position, radii, etc. With the Circle-/Arc Segments parameter you can determine the resolution to which these elements are broken down as CyberMotion objects. An extruded circle (e.g. a cylinder) for which you have entered a segment number of 15 is subdivided into 15 segment parts (exactly as in the sweep editor). The segment number applies here, however, to an entire circle - with a segment number of 40, a circular arc of 180 degrees is subdivided into only 20 segments. Saving CyberMotion objects in DXF format If you choose "Save - All Objects" or "Save - Selected Objects" in the menu bar and select the DXF extension in the file select box, then you can export CyberMotion objects in DXF format. The program then generates a DXF file in which objects are formed from 3DFACEs. A layer is included for every object (layer name = object name), defining which 3DFACEs are assigned to which object. RAW-3D Format RAW 3D objects are organized very simply. A RAW file consists only of data for a number of triangular facets - without any other object information. The functions in the dialog correspond to those in the DXF import dialog. CyberMotion objects can also be exported in RAW format - generating a file consisting of triangular facet data only. VRML 2.0 Export (Extension ".wrl") The VRML export allows the integration of CyberMotion projects into VRML-capable browsers. This way you can create walk-through virtual worlds for the internet. You can also assign URL links (internet addresses like http://www.3d-designer.com) to objects. Then, in a VRML-capable browser, you can change to other web pages simply by clicking on the corresponding 3D object. It is also possible to enter a link to another VRML project file (extension ".wrl"). Clicking on the corresponding object then results in a direct jump into the next 3D world. The complexity of scenes should be kept to an absolute minimum to facilitate smooth movements in a VRML browser. These properties will be exported: Objects - Color, interpolation and shininess will be saved. Bitmap textures - See: Converting to Foreign File Formats Illumination - The three standard types of illumination used in CyberMotion are supported - lamps, spots and parallel light. Even light decrease (falloff) and spot cone interpolation are known in VRML, but don't expect too much from the realization of these light types in the different VRML browsers. Headlight - In the export dialog you have the additional option to switch on a headlight. The headlight always points in the viewing direction while moving in a 3D world. Background - The 3D sky and the fog effect can be exported to a ".wrl" file.
Even all three sky gradient colors can be seen again in a VRML browser. There is no mirroring of the sky below the horizon - instead, you can specify a ground color in the export dialog. This ground color can also be used as a replacement for the plane object. Since you can't see shadows in a VRML browser, nobody will notice that the ground is just a background color (provided you don't rotate the world). URL addresses for 3D object links can be entered on the VRML page in the material editor. DirectX Export (Extension ".x") This format is used by Microsoft's DirectX 3D graphics engine. You can now export CyberMotion objects directly to the DirectX object format (extension "*.x"). CyberMotion exports objects to 3D meshes with regard to colors, transparency and bitmap textures. DirectX requires bitmaps to be square, with side lengths that are powers of two (2^n), e.g. 32*32, 64*64, 128*128 or 512*512. Animation data will not be saved. There is more to take note of when applying bitmaps - see the following section. .topic 85 There are some points to consider when exporting objects created with CyberMotion to foreign file formats. Usually bitmap textures cannot be placed freely on an object (label effect) as in CyberMotion. Mostly you have to assign and adjust bitmaps for each facet of the object individually. When exporting projects to foreign file formats, CyberMotion will do this job automatically by calculating the bitmap details for each individual face, but there are some restrictions: If a facet is only partly covered by a bitmap, then bitmap texturing is switched off for this single facet. So, if you are creating objects with the intention of exporting them later to foreign formats like the DirectX or VRML format, you should scale bitmaps accordingly, so that they will cover all relevant faces. If necessary, add some more detail to the object by triangulating parts of the object. Specific effects in CyberMotion like alpha mapping, reflection or transparency maps of course cannot be reproduced in other object formats. When exporting objects in VRML format, the object will be divided into several parts, each part assigned to its own individual bitmap, plus a possible remaining part without bitmap texture. The tile function isn't supported by all formats. Other features, like non-transparent bitmaps on transparent objects, will be supported. Some 3D formats - for instance DirectX - require bitmaps to be square with side lengths that are powers of two (2^n), e.g. 32*32, 64*64, 128*128 or 512*512 pixels. Also, most 3D formats require specific bitmap formats. DirectX 3D files, e.g., support only "*.bmp" and "*.ppm" pictures, and VRML files expect pictures to be in "*.jpg", "*.png" or "*.gif" format. You should save your project files together with the relevant bitmaps in a single folder, so that other 3D viewers or VRML browsers will find the pictures without the need to define additional paths. .topic 110 Practice makes perfect. You should have a look at these tutorials. Once you have gone through them you will have a good idea of the capabilities of the program. However, these examples show only a fraction of the possibilities of the program. You should study the other chapters of this reference book if you want to dig out the many other possibilities the program offers.
Hall with Columns - The beginners' tutorial
Facet Extrude - In a few minutes from a box to a plane model
Simple Animation - How to set up an animation
Animation and Object Hierarchies - Complex movements about several joints using object hierarchies - how to construct a robot
Particle Animation Examples - How to deal with particle systems
Landscape Design - Create your own worlds
Animation and Deformation - How to animate object deformations
Character Animation - How to animate characters using skins and bones
.topic 52 This tutorial is specially written to make the introduction to the program a little easier, and also to demonstrate that great pictures can be rendered with very little input. The example mainly demonstrates the modeling and arranging of a scene; no animation is involved, hence all work steps are carried out in Modeling Mode. The object file "halle.cmo" for this scene can be found in the projects folder, so you could produce a picture of the end result of this tutorial in advance. Setting Up the Objects The picture of the hall and columns is a typical example of how a scene can be produced from only a few simple objects, using reflections, shadows and textures, which on rendering leads to an amazingly complex and realistic picture. The object file contains 22 objects that are built from only 5 basic shapes:
6 marble columns for the hall
4 plinths for the columns
5 elements for floor, ceiling and walls
4 large vases
3 light objects
We start with the preparation of the object data, beginning with the production of the first column: The column can easily be created from the analytical primitives menu. Analytical objects are described by just a few parameters instead of being constructed from real points and faces - their faceted representation in the viewports is just an approximation of their real shape. When rendering later in raytracing mode, analytical objects are perfectly smooth and rounded, and rendering analytical primitives is much faster than rendering faceted objects. Select "Objects - Analytical Primitives >" in the menu bar and choose the cylinder icon from the sub-menu. A dialog opens with two parameters for the cylinder radius and cylinder height. After you have entered the parameters (test data: radius 10, height 75), operate the button to create the object. Enter a relevant name for the object (e.g.: *Column1) in the dialog box that appears and confirm it with Return. As you will see, the column is now drawn in the viewport windows. Next, we want to set up the plinth for the row of columns: Enter the extrude editor via "Extrude" under "Objects" in the menu bar. A new selection of buttons appears in the toolbox. Because we require only a simple rectangular base for the plinth, we can simplify the work by switching on the grid-snap buttons in the toolbox - points will now be constrained to a preset grid. When you move the mouse over the viewport the mouse pointer changes to a crosshair and you can begin to draw a rectangular template in the window. Extruding a simple rectangular shape will result in a rectangular box. Leave the segment number at 1. The depth is unimportant, as we can later adapt the object's size to suit the columns in the "Scale Object" work mode. Simply create the rectangular object by operating the "Extrude Object" button. In the dialog box that appears you can once again give an appropriate name (Plinth1) and confirm it with Return.
The large arrow button over the viewport returns you to the main menu (or you can select the "Back to Main Menu" entry in the Options menu). "Column1" and "Plinth1" are now drawn in the viewports. Click on the "Plinth1" object in the viewport window to select it for editing. Alternatively, you could have called up the Select Object dialog and marked the "Plinth1" object by clicking on the corresponding name in the object list with the right mouse button. In "Move Object" mode, a dotted-line box appears around the selected object, "Plinth1". Click with the left mouse button into the viewport and move the plinth under the column (while still holding the left mouse button), so that the plinth stands exactly in the middle of the base of the column. There are several possibilities to help you with this: Movement can be restricted to the vertical or horizontal direction with the three direction buttons in the tool window. With the help of the snap functions, objects automatically snap to nearby background grid lines or grid points, or to the lines and points of other objects. You can increase or decrease the visible picture detail via the Zoom parameter in the button bar. If you have a mouse with a mouse wheel at your disposal, just turn the wheel up and down to zoom in and out of a viewport window. You can move freely over the scene in the viewport windows when you hold down both the left and right mouse buttons while moving the mouse over the viewport. In the Move Window menu you only need to hold down the left mouse button. There you can also read the window coordinates or enter them directly via the keyboard. The last operations can be undone using the Undo button or redone using the Redo button, respectively. The plinth and the cylinder. The object at the top of the window is the camera symbol. If the camera object is disturbing you during your construction work, simply select it, call up the pop-up listbox by clicking with the right mouse button into the viewport, and select the "Hide" entry. The camera will still be active, but you can't select or see it anymore in the viewport. You can switch it on again in the Select Object dialog. Now we adapt the size of the plinth to suit the cylinder: Change to scaling mode by selecting the "Scale Object" entry in the Options menu bar or in the button bar. The plinth is still selected and a crosshair now appears exactly through the center of the object. Furthermore, the dotted-line box appears again around the plinth. The framed object can now be enlarged or reduced - again with the help of the mouse. The crosshair is the reference point of the scaling and can be moved simply by grabbing it within the 4 arrows at the crossing point. Then, when you want to scale the object, move the mouse out of the area within the 4 arrows of the crosshair. After that, hold down the left mouse button and move the mouse to the right (left) or upwards (downwards) to enlarge (reduce) the object. The plinth is a little bit too thick, so scale down the height a little in "Front" view. Then activate the viewport window "View: Top" - which shows the plan view - by clicking on the viewport with the mouse. Top view Here you can plan the plinth's enlargement in depth along the z-axis. With the help of the mouse, you should pull the plinth to a sufficient width so that two more columns can also be placed on it. It is also advisable to enlarge the picture detail via the Zoom function.
The scaling operations will have resulted in the plinth having moved, so reposition the plinth in "Move Object" mode. If the cylinder is hovering above the plinth, this time choose the "Drop Selection" function to let the cylinder drop exactly onto the surface of the plinth. The bulk of the work for the hall has now been completed, as the remainder is dealt with very quickly through the Copy and Move routines. Go into the Select Objects dialog and select "*Column1" with the right mouse button. Create two additional copies of the first column by operating the button twice. These copies are recognizable by a "-" preceding the name. Double click on the first copy in the object list to be able to rename it and change the name to "*Column2". Change the name of the third column in the same way. Now leave the dialog and proceed to "Move Object" mode. The scene appears to be unchanged as all three columns occupy the same space. In the "Right" view viewport click once with the left mouse button on the columns. Since they lie on top of each other, a small pop-up box appears listing all objects underneath the mouse pointer, so you can easily select an object from the list. Choose "*Column2". Using the mouse, move the marked column to the right, to the middle of the plinth. Restrict the movement to the horizontal direction by first selecting the horizontal "Mouse Lock" button. The coordinates of one of the corners of the object's framework are always indicated in the tool box for exact positioning. You can stipulate the corner to be indicated just by selecting one of the corresponding corner buttons in the cube displayed in the tool window. Incidentally, the object can be precisely positioned by inputting the corner coordinates directly from the keyboard. After you have precisely positioned the second column, you can turn to the third column and move it to the end of the base plinth. Click once again on the two columns that still overlap each other to mark the third column. After you have moved the third column to the right position, you can then select the column-center coordinates again to check the correct spacing of all the columns against each other. Use the Select Object dialog to copy an additional plinth to use at the top of the columns and place it accordingly, in the same way as you set up the additional columns. One row of columns with plinths at the bottom and top in the "Right" view. We now also need three columns and two plinths on the other side of the aisle. This is even easier, as we simply copy the whole scene completely and then position the entire copy on the other side. Return again to the Select Object dialog. Mark all three columns and both plinth objects with the right mouse button while simultaneously pressing the button for a multiple selection. Now you need only operate the button and all objects are duplicated. You could have done the same directly in the viewport windows without calling up the Select Object dialog. Just click with the left mouse button on an object in a viewport to mark it. Again, hold down the button to add another object to the selection. While holding down the Shift button you can remove objects from the marked selection. Holding down the or button and moving the mouse after clicking in a viewport, you can drag a frame around a group of objects to mark or unmark several objects at a time. Click with the right mouse button to call up a popup listbox. There you can also select the copy functions for the marked object group.
The object framework is now drawn around all marked objects. Now, in the Front or Top view, simply position the marked objects to the right and the colonnade is ready! Columns and plinths centered in "Top" view Still missing at this point are the floor, ceiling, walls, lamps and the vases. As we later want different materials for the walls, the floor and the ceiling, we construct them from separate blocks. We already have the plinth objects with the correct length of the hall, so we just copy one of the plinths, rename it to "floor", move it down neatly under the bottom plinths and then move it horizontally to the center between the plinths. Change into the "Scale Objects" work mode again and widen the floor so that it lines up at both ends with the ends of the plinths. Hall with floor After the floor has been put into place, just copy it again and move the copy, as the new "Ceiling", to the top of the columns. Now the side walls. Again, we make a copy of the floor object. We could scale the new object now - squeeze it horizontally and stretch it vertically - to reshape it into the side wall. But, for practice, we choose another way. We will rotate the copied floor object by 90 degrees and move it to the side. Then we only need to scale the side wall vertically, as the thickness of the wall will be identical to the thickness of the floor. Change into the "Rotate Object" work mode. Now, when clicking into the viewport and moving the mouse, marked objects are rotated. We want to carry out the rotation in "Front" view about the axis standing perpendicular to the window plane. In the Rotate Object menu you again find a Mouse Lock function that restricts movements to certain axes. If the left button is active, objects can be freely turned about axes that are horizontal and vertical in the viewport. With the horizontal arrow button activated, objects are only rotated about the vertical viewport axis. It's much the same with the vertical arrow button - all rotations are then about the horizontal axis. But we want to rotate the wall about the axis that points directly out of the viewport window, therefore we select the last button with the circle. Now just rotate the side wall with the mouse by 90 degrees until it is standing vertically. The rotation angles can be read at any time in the "Angle of Rotation" box. You could also have entered the angle of 90 degrees for the z-axis directly in the "Angle of Rotation" box and pressed the button afterwards. Now, by scaling the wall vertically and moving it, you can obtain the required size and position of the side wall. Copy the side wall and position it on the opposite side. The final hall in "Front" view Finally, we place one more wall exactly at the end of the colonnade. Later, this should create the illusion of infinite depth for the colonnade by simple reflection. Next we want to create a number of vases for the spaces between the columns. Go into the sweep editor. On the right side of the display area you can produce a flat template which, when rotated about the central axis, results in a sweep object. Design a template to your liking, similar to that depicted above, which, by sweeping, results in a beautiful vase. A segment number of 12-15 should quite suffice to convey an impression of roundness later on, thanks to object interpolation. Generate the object and return to the main menu. Scale the "vase" so that it suits the size of the colonnade.
Then copy it three times and position the four vases on the plinths between the six columns. Finally we must set up three more objects that will accommodate the lamps for the walkway. Go either into the extrude or the sweep editor again and construct three lamps to your taste which approximately fit the colonnade. Fit them into the ceiling between the columns. The Camera View through the camera directly into the hall Next we want to position the camera. Choose the Camera button in the button-strip directly above the viewports. Aids for aligning and positioning the camera appear in the tool window, as well as buttons for moving the camera along various axis systems. The intention is to move the camera to a point directly within the colonnade, just in front of the first pair of columns. Positioning the camera is very flexible in CyberMotion: Use the arrow buttons beside the camera coordinates to move the camera to the required position. The three buttons decide whether you move along the world axes or the camera axes, or circularly about an object (see also: "Camera" menu). Clicking in the camera viewport window and moving the mouse is another way to move the visible camera detail - again along the previously chosen axis system. Holding down the button while moving the mouse will move the camera to the front or the rear, along the world z-axis or the camera's own z-axis, respectively. This way of moving the camera by clicking in the camera viewport can be used in all work modes, not only here in the camera menu. Then, you can go into the Move Object mode again. There, the camera, like all other objects, can be selected for editing and positioned with the mouse. In the "Camera" viewport window you can see directly how the camera picture detail changes. Back in the camera menu you can make a more exact adjustment of the camera position with the arrow buttons. You can also input the camera coordinates directly. We want, however, to capture the foyer in its entirety in the picture. This is not possible with the normal preset width of focus. Therefore, we simply set a different width of focus with the help of the "Zoom" parameter, to achieve a wide-angle effect. In our example I have chosen a value of 23. With these settings you should see approximately the same picture in your camera viewport window as shown in the illustration. Light The complete scene has now been set up and we can turn to the illumination. Go into the Light dialog, which is reached through "Options - Light..." in the menu bar. In the light dialog you can generate new light objects and determine all necessary settings for the light's color, intensity and alignment. If you call up the dialog for the first time you will see that two light objects already exist in the selector box. These are the light object "AMBIENT" for the area brightness and the light object "PARALLEL", which represents a light source with parallel light diffusion and without origin. These light sources are generated at program startup so that, from the start, preview pictures have a basic background illumination. For our scene we want three lamp objects with radial light spread - like a normal lamp. The parallel light source is superfluous for the present, so we just switch it off. You obtain the required three lamps simply by operating the button in the top-left dialog field three times. The three lamps are then immediately shown in the selector box and can be edited.
Another possibility is to generate only one lamp, edit it, and then finally go into the Select Object dialog to make 2 copies of the lamp. Light objects are treated similarly to all other objects and can be copied as well as deleted and switched on or off in the Select Object dialog. When you generate a lamp, a host of new parameters for this light type is shown in the right half of the dialog. Don't worry - for our tutorial we require only the light color and the reduction of light intensity with distance. All the other parameters on the right side are for light effects formed by lens reflections in the simulated camera lens and do not apply to our picture. Now, via the Light Color button, we enter the light colors for all three lamps. The Halo Color is required for light effects and can be ignored by us. For the lamps we enter a very bright yellow (e.g.: RGB = 255,255,220). In addition we have to enter the light intensities for the lamps. In the real world the intensity of a point light source falls off in proportion to the square of the distance, i.e. doubling the distance between the object and the light source reduces the light intensity to a quarter. In computer graphics, however, this does not lead to satisfactory results (is there anything in the real world that comes close to a real point light source?). CyberMotion therefore uses a special filter to reduce the light intensity with distance; to enter an appropriate intensity for a lamp you simply specify a maximum radius - the distance at which the light intensity has almost dropped to zero (see the short sketch at the end of this light section). You can enter this distance via the "Light Intensity - Maximum Range" parameter located directly beneath the light color button. The hall is 400 units long, so we enter a light intensity or maximum range, respectively, of 800 units as a first approximation. Since all 3 lamps have overlapping maximum ranges, we may have to reduce the intensities later on. The area illumination in the picture must not be forgotten, because plenty of scattered radiation also originates from the many mirrored walls. Therefore select the light object "AMBIENT" in the list box and enter a dim yellow of about R = G = 70, B = 50 for the general area brightness. But that's not enough for our mirrored scene. Normally, photon mapping is used to simulate the interaction of light particles in a room, but this is a very time-consuming algorithm and it is better suited to non-reflecting walls than to our mirrored walls. In our scene all light incidence comes from above. The mirrored walls and also the light marble floor would reflect light back to the ceiling. Now we can use the parallel light again that we switched off at the beginning. Enter an inclination angle of 90 degrees, so that the light is directed vertically against the ceiling, switch the parallel light on again, and switch shadow casting off - otherwise no light would penetrate through the floor. Enter a very low light intensity that is just enough to throw a little light back to the ceiling. Such tricks are very often used in the computer graphics industry; this particular trick is called a "fill light". Even in real photography fill lights are used, e.g. during photo sessions when white boards are used to bounce spill light back onto the subject. Fill lights are also often used to produce special reflections on objects or to save time on complex shadow calculations. We are now finished in the Light dialog and can go back to the main menu.
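To make the difference between the physical inverse-square law and a range-limited falloff concrete, here is a small illustrative comparison in Python. The exact filter CyberMotion applies is not documented here; the "maximum range" function below is only an assumed, simple stand-in that fades the intensity to zero at the specified radius (the 800 units correspond to the value entered above):

# Illustrative only: physically correct inverse-square falloff versus a simple
# "maximum range" style falloff that reaches zero at a given radius.
def inverse_square(base_intensity, distance):
    # Doubling the distance quarters the intensity.
    return base_intensity / (distance * distance)

def max_range_falloff(base_intensity, distance, max_range=800.0):
    # Assumed stand-in for a range-limited filter - not CyberMotion's actual formula.
    if distance >= max_range:
        return 0.0
    return base_intensity * (1.0 - distance / max_range)

print(inverse_square(1.0, 100), inverse_square(1.0, 200))        # 0.0001 2.5e-05 (quartered)
print(max_range_falloff(1.0, 400), max_range_falloff(1.0, 800))  # 0.5 0.0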
Here you see that three circles, each with a cross, are now drawn; they represent the three light sources. These still have to be placed by moving them to the lamp fittings. Go again to the Move Object mode. There you can select and position the light objects in exactly the same way as previously with all other objects in the scene. So that the light is not shielded by the lamp fittings, the object attribute "No shadows" has to be switched on for the lamp fittings in the Material dialog. In addition, switch on the glow material effect for the lamp fittings to simulate a visible light body. Of course, the object is no real light source - it is just painted in a brighter color; the light emanates from the point light source centered within the lamp fitting. (Note: CyberMotion offers the possibility to turn each object into a real light source, e.g. for area lights, but the method described above - putting a standard light into an object with the glow material switched on - is much faster when calculating the picture.) Adjusting light intensities - When all 3 lamps are in place we can turn back to the Light dialog to adjust the light intensities. Select the "Camera, complete scene" preview mode in the select box beneath the preview window. If redraw is selected, you immediately get a first test rendering of the complete scene. On the basis of this preview picture you can adjust the light colors and intensities until you are satisfied with the illumination. A first test rendering in raytracing mode. (Start the rendering via the "Render Final" button.) Up to now no materials and textures are involved, but the scene already looks pretty nice. Material After you have dealt with the above, we can now turn to the somewhat more complicated material adjustments: Choose the "Material/Color" entry in the Object menu strip. Before you can set the material for a particular object, you must first select the object in the selection box of the Material dialog. Objects that don't require any material settings, such as camera, background and all light objects, appear gray and cannot be modified. You have the following possibilities to choose a material setting for individual objects:
Set up the material by hand on your own.
Simply load a material from the library.
Load a material and change it to your own requirements.
Producing individual textures is explained in detail in the following. You can save yourself work by simply selecting the corresponding material from the library, but for each application at least the colors and gloss/reflection parameters usually have to be adjusted. You can use the program to quickly produce materials of your own from similar examples and continually adapt the material library to your own requirements. Here are the material settings for the individual objects: Columns, Plinths and Ceiling - For the columns and plinths we simply load a bitmap texture and project it onto the objects. To choose a bitmap for an object, change to the corresponding page by selecting the tab at the top of the dialog. A list box at the top of the page shows all bitmaps used for an object. You can add new bitmaps, copy existing ones or delete bitmaps from the list. If you now choose the button, a file selection box appears in which you can select a picture to produce the texture. Select the "Boden4.tga" bitmap, a special marble bitmap suitable for our project. Now we have to choose one of the projection types.
For the cylindrical column objects the "Cylinder<" projection is correct, as you would expect. The bitmap is wrapped around the cylinder in much the same way as you would affix a label to a bottle. For the plinths we choose the "Plane ><" projection instead. The bitmap picture is simply projected onto the front and rear of the object. This can be compared with a slide picture projected onto an object. The arrows in the "Plane ><" mode indicate that the projection runs through the object, so that the rear of the object is also textured with the bitmap. The projection direction is oriented along the z-axis of the texture axes belonging to each bitmap. We still have to align this z-axis, because when a bitmap is newly added its z-axis always points to the front, in the direction of the world z-axis. For our plinth we need the projection directed from the top downwards onto the floor space. Therefore we leave the dialog and change to the Rotate Object work mode again. At the top of the tool window select the "Rotate" tab. In the "Select Texture/Bitmap" select box then choose the bitmap "Boden4.tga". Thereupon a grid appears in the viewports presenting the exact bitmap dimensions and the 3 texture axes of the bitmap. In top view you can see the z-axis alignment along the world z-axis. Now just rotate the bitmap grid by an angle of 90° about the x-axis. That's it - now the top and bottom sides of the plinth are textured with the marble bitmap. But the side walls of the plinths are not included in the projection yet. There are two solutions. You can increase the projection angle for the plane projection up to 90°. This will also cover the side walls, but just as with a real slide projector, the parallel projection will appear as stripes along the side walls. A more elegant way is to add a second bitmap, again the "Boden4.tga", and to rotate it in the Rotate Texture mode so that it casts the projection along the world x-axis onto the side walls of the plinth. The texture bitmap includes all necessary parameters (picture information) for colors and textures, so we need only enter the shine and reflection values for the individual column/plinth objects. We return to the Material page of the dialog by choosing the button. For Reflection we enter a value of about 0.30 and a highlight value of 0.29. The reflection is important for the intensity of the highlight and the degree of reflection, while the highlight parameter describes the radius of the highlight. We do not switch on the mirror button for the marble material, however, because it could result in a confused and unappealing pattern caused by mirror reflections overlaying the texture. It is therefore only rendered with highlights. However, you might nevertheless switch on the reflection and lower the reflection share a little so as not to cover up the texture impression. Floor - Next we define the material for the floor: For the basic material color we choose white, both for the diffuse and the specular reflection. For the Reflection parameter we give a value of about 0.25 to obtain relatively weak highlights and reflections. The Highlight value for this surface should be relatively high - about 0.76. We must switch on the Reflection button so that the surroundings are later mirrored in the surface. As we want to use a checkered texture here, we change to the procedural Texture page of the dialog.
For the relatively simple checkered texture we choose the block pattern from the top-most dialog field in the list box and set matching block dimensions of (X, Y and Z = 15). Leave the Net-Width parameter at zero so that there are no spaces between blocks. Then choose a black texture color to contrast with the white material color. Other parameters are not necessary for this relatively simple pattern and we can return again to the Material page. The side walls in the demo picture are also mirrors (reflection switched on, reflection of 0.25, highlight 0.07) and have a dark golden basic color (R, G, B = about 68, 51, 0). The walls serve as pure mirror surfaces in the picture and manage without any texture. Lamp objects - There are three point light sources directly within the lamp objects. If the picture is later rendered with the shadow option, no light penetrates to the outside from the lamp objects and nothing is seen. First, we switch on the object attribute "No shadows" for the lamp objects. In addition, the lamp fittings should simulate visible illumination sources (see the explanation about light settings above). We obtain that by using a self-luminosity of 0.9 for the Glow parameter and a bright yellow for the lamp's color, perfectly simulating the light object. The material for the vases - I will leave the construction of the material for the vases to your imagination. Load a material from the library and play with the parameters. Explore the vast possibilities - you can't do anything wrong. If you want to undo your modifications just leave the dialog and press the Undo button in the button bar. Picture Parameters We can now set the parameters for rendering the picture in the Render Options dialog. Choose the "Render Options" entry under "Options" in the menu bar. The scene is not very complex, so we can start picture rendering right away with the high-quality raytracing algorithm. Then set the buttons for shadows and reflection in the "Raytracing" box. All effects that you choose in this box can only be rendered with the raytracing algorithm. Also switch on antialiasing, which improves the screen image by smoothing steps between pixels on the screen. Input a value of 1 for the reflection depth, which is perfect for mirroring the foyer (walls, floor and ceiling). The antialiasing default depth value of 1 is sufficient for this picture. Leave the dialog and start the rendering of the first test picture with the button in the button strip above the viewports. .topic 87 The manual's section on "Facet Extrude" shows how to add new segments to objects simply by selecting facets and dragging them in or out of the object. You can even construct whole objects in this manner. Our example demonstrates this by modeling a plane from a simple box in no time. The starting point: A box generated from the primitives menu. The box represents what will later be the fuselage of the plane and has to be stretched a little bit in "Scale Object" work mode. Change to "Edit Object" work mode. Select the facets on the right side of the box and activate "Facet Extrude" mode. Now click with the left mouse button into the viewport window and drag an additional segment out of the box while holding the left mouse button pressed. This segment forms the connection to the tail. Now scale the still selected points on the right side of the segment symmetrically ( -button active) in "Scale Object" work mode.
Move the point selection upwards in "Move Object" work mode, so that it lines up with the left segment. Back in the "Edit Object" menu, extrude the still selected facets again to the left to form the tail. Now select the front and back facets of the newly created tail segment. Then another "Facet Extrude" operation will drag out new segments on both sides of the tail segment simultaneously, thus forming the tailfins. Scale the same point selection in the vertical direction to taper the fins at the ends. After that, deselect the points at the right end of the wings and scale the remaining points at the front horizontally to create a little beveling. Then move the points a little to the right until the tail resembles the illustration shown below. Next we select the upper facets of the tail as depicted above. We again apply the "Facet Extrude" function to drag out the rudder segment of the plane. Once more, scale and move the points of the newly created segment to taper the rudder towards the top. Now select the side facets of the front fuselage and drag two new segments out of them. The fuselage viewed from the left: the outer, lower points of the new segments were selected and moved upwards. After that, select the facets of the new segments again and... do another "Facet Extrude" operation to drag out the side wings of the plane. Scale and move the points again to taper the wings in the same way you did with the tail wings. The nose of the plane is still missing, so we select the front facets of the fuselage and drag them sideways to the left. Then a symmetrical scaling of the selected points is again carried out to taper the nose. The cockpit of the plane is produced likewise. The finished plane in camera view: somewhat bulky, but quite recognizably a plane. And this is how it looks after applying the smooth function from the "Edit Object" menu twice.

.topic 57
This tutorial concentrates on the preparation of a complete animation out of existing objects, rather than composing and editing a scene. Load the file "anim_a.cmo" into the program. You will find the file in the folder "project\anm_demo\anim_a.cmo". The file includes the objects and will serve as the starting point for the tutorial. However, if you want to get a picture of the end result beforehand, you can look at the complete ready-made object file "project\anm_demo\anim_b.cmo" in the same folder. In addition to camera, light and background objects, the file "anim_a.cmo" also includes the two objects pictured above - the objects "framework" and "segment". The yellow "framework" object can be set up in the extrude editor with the help of the Hole function. The exactly matching 5 red elements of the "segment" object can also be generated in the extrude editor and combined into a single object in the Select Object dialog. These two objects should now serve as the starting point for setting up the following animation: At the start of the animation, the "framework" is seen on its own, rotating about its horizontal axis. The "segment" object then comes from a position lying behind and slightly below the camera viewpoint and flies into the picture, turning about its crosswise axis. The camera approaches the objects, looking towards their slightly lower position. At its closest point it is at the same height as the objects. The "segment" object meets the "framework" object at the moment the rotations of both objects line up - so that they mesh together exactly, as in the above picture. 
However, both objects continue to rotate and the "segment" object flies on beyond the "framework". Both the "segment" and the "framework" continue to turn fully about their axes, and the animation then ends. In the Front view, in work mode, the final frame of the animation is exactly the same picture as the start picture. In the camera view, however, due to perspective and the setting of the zoom parameter, in the final picture the "segment" fits exactly within the circle of the "framework", resulting in an interesting new pattern. First we have to change from Modeling Mode to Animation Mode by pressing the -button in the button strip. Some functions in the tool window are now hidden or exchanged for other functions that are only available in Animation Mode. Furthermore, the animation button-strip at the bottom of the screen has been activated. The animation button-strip - a detailed description of each button is provided in the corresponding chapter "Animation Button-Strip". Here is a short summary: With the -button at the left you call up the animation editor, in which you can edit the timelines and animation tracks of the individual objects. Using the slider and the green navigation buttons you can move forward and back in the animation timeline. The blue buttons are the play, play range, loop and ping-pong buttons. Use them to start preview animations directly in one of the viewport windows. Click on a viewport window to activate it before starting the preview. To stop a preview animation, simply click anywhere with the mouse or press any key. The red Record button is for recording keyframes manually. The five buttons following the record button are the track buttons. Every time you manipulate an object, a keyframe is generated for all tracks that are activated here. For example, if the rotate track is activated and you move an object, then a position keyframe is generated automatically because of the movement of the object, and additionally a rotate keyframe because the rotate track was selected in the animation button-strip. This ensures a fixed position and alignment of objects in time. (See the chapter on the animation button-strip for examples.) The last two buttons activate the generation of keyframes for all children in a hierarchy or for the whole hierarchy up to the topmost parent, respectively. Now let's jump in. The "framework" turns about its Y-axis, but does not change its position. The "segment" comes from the front, from the negative Z-direction, and flies, rotating, through the "framework". At one point "framework" and "segment" are at the same location and fit together exactly. This situation corresponds to how both objects have been saved in the file, so it can be used as a key scene. We therefore have to create keyframes for the objects and then move this situation forward in time, since the objects should overlap in the middle of the animation, not at the beginning. Normally, keyframes are generated automatically every time you move, scale or rotate an object. But we need this scene as it is right now, so we have to create the keyframes manually by pressing the -Record button. First, however, we have to choose the tracks for which keyframes are to be created. In this animation, objects are only moved and rotated. Therefore we activate the corresponding buttons for the position and rotate tracks in the animation button-strip. Then mark the "framework" and "segment" objects in the viewport window. 
Finally, just press the record button and keyframes are generated for both objects. But these keyframes hold the data we intended for a later moment in the animation. Therefore, call up the animation editor now. Here you can see the position and rotate tracks with the keyframes on frame 1 that we created by pressing the record button. We now copy these keyframes to a new destination point further along in the animation. Select the "framework" object with a mouse click and then - clicking while holding down the button - add the "segment" object to the selection. In the timeline window choose frame 1 with a mouse click on the first frame in the timeline. Now, in the "Cut, Copy, Paste" box select the -Copy button. The data of all marked keyframes is now copied to a temporary buffer. Next, choose frame 20 as the destination point in time by clicking on frame position 20 in the timeline. Paste the copied data from the buffer to this position simply by clicking on the -Paste button. The timeline representation after the copy and paste operations: the key data at frame positions 1 and 20 are now identical. Keyframe 20 will remain as the goal at which the "segment" meshes with the "framework". We want to change the scene in keyframe 1 to edit the approach of the "segment". We must make frame 1 the current frame, and this is accomplished simply by clicking on frame 1 with the mouse again. Back in the work mode, select the "segment" object and move it in the negative direction of the Z-axis to a new position of -1000. For this it is best to go to the Right view. To prevent vertical movement, first select the horizontal arrow button in the tool window, or simply enter the new Z-coordinate directly via the keyboard. After the redraw you can see the movement path of the later animation. The "segment" moves from its new position in keyframe 1 towards the "framework" at keyframe position 20 in 20 steps. Now come the rotations. The special effect of the final animation comes about because both objects rotate about different axes and then line up exactly at the moment they coincide, so that the "segment" object can fly through the "framework" object without touching it. We must therefore change into the "Rotate Objects" menu and rotate the individual objects about their axes in the first keyframe as follows: 1. The "segment" object through -90 degrees about the X-axis. 2. The "framework" object through -90 degrees about the Y-axis. This is all that is necessary for the first keyframe. The scene should appear in the three views as shown in the illustration above. Now we want to turn to the third key scene - the position after flying through the "framework". Use the navigation button-strip to move to the current end of the animation at frame 20. It is reached by a single click on the -button. The next keyframe should follow 20 frames later. By operating the -button twice - each time extending the animation by a further 10 frames - we reach frame position 40; the animation is thereby automatically extended by 20 frames. The "segment" object is still in the same relative position to the "framework" that it occupied in the last keyframe - at frame position 20. We select the "segment" object and this time move it in the opposite direction (i.e. in the direction of the positive Z-axis) to the position z = +1000. Because the object has changed position, a new keyframe is automatically generated at frame position 40, so we no longer need to concern ourselves with the animation editor. 
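As an aside, the in-between frames you will see along the movement path are simply interpolated from the keys. Here is a rough, hypothetical Python sketch of what the position track does between keyframe 1 (z = -1000) and the key copied to frame 20 (here assumed to lie at z = 0 for simplicity), using plain linear interpolation - the program's actual interpolation is not reproduced here:

# Illustration only: linear interpolation of a position track between two keys.
def interpolate(key_a, key_b, frame):
    (frame_a, pos_a), (frame_b, pos_b) = key_a, key_b
    t = (frame - frame_a) / float(frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(pos_a, pos_b))

key1  = (1,  (0.0, 0.0, -1000.0))   # "segment" moved back in keyframe 1
key20 = (20, (0.0, 0.0, 0.0))       # the meshing position copied to frame 20

for frame in (1, 5, 10, 15, 20):
    print(frame, interpolate(key1, key20, frame))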
You can now see from the path that the "segment" moves through the "framework" on a straight line. Now we must deal with the further rotation. We therefore rotate again: 1. The "segment" object by +90 degrees about the X-axis. 2. The "framework" object by +90 degrees about the Y-axis. Note that the rotation is in the opposite direction. Where we rotated through -90 degrees in keyframe 1, we now rotate by +90 degrees. This is because keyframe 1 lies in time before our starting situation, while the present keyframe 3 refers to the objects as they are after the starting situation. The scene representation at frame 40, after the "segment" has flown through the "framework". Now only a fourth and last keyframe for the objects remains. With it, the "segment" moves a further 1000 units and both objects carry out a final rotation, which brings them to the final position. Seen from the front, both objects then fit together exactly again, but now they stand about 2000 units apart on the Z-axis. Above, in the Right view, you can see the goal in keyframe 4. To reach this result, exactly the same steps are executed as for the third keyframe, therefore: 1. Move on a further 20 frames with the -button to frame position 60. 2. Move the "segment" object by +1000 units in the direction of the positive Z-axis. A fourth keyframe is thereby automatically generated for the object. 3. Rotate the "segment" object by +90 degrees about the X-axis. 4. Rotate the "framework" object by +90 degrees about the Y-axis. Thereby a fourth keyframe is also generated for this object. The object movements in the Right view. Now that the positioning of the objects has been settled, we can move on to the camera and light positions. Beforehand, however, you could look again at the precise movement path. Operate the -button to return to the start of the animation at the first keyframe. Now choose the -button to start a preview animation in the active viewport window. To run the animation in another viewport, simply activate the corresponding window with a mouse click before operating the button. As the animation runs you can follow the precise movements in the different views. This is especially true for the camera window, which shows the later point of view of the animation. The camera, however, still has to be animated, and we come to that now.

Camera
Animation of the camera is accomplished quickly and simply. Change into the Camera menu and return to the start of the animation at frame position 1. In this example animation a simple camera movement is implemented - from an elevated viewpoint down to the same height as the objects. Simultaneously the picture is zoomed into the scene. Only 2 keyframes are required. 1st keyframe: In the Camera toolbox, move the camera to the following position: x = 0, y = +500, z = 1130, using either the cursor buttons or by entering the coordinates directly from the keyboard. Select the "framework" object directly in the viewport with the mouse. We do not need to enter the camera angle by hand because, if we now operate the "Camera" button, the camera is automatically lined up on the marked "framework" object. Now we need only adapt the enlargement. Set the "Zoom" to a value of 38 for the 1st keyframe. 2nd keyframe: Move to the end of the animation - frame position 60. Move the camera down to the following position: x = 0, y = 0, z = 1130. The 2nd keyframe is automatically generated in the animation editor through the position change. 
For the camera animation, therefore, we do not need to call up the animation editor a single time. Select the "framework" object again (if it has been deselected in the meantime). Operate the "Camera" button again to line the camera up exactly on the object. Increase the picture detail by setting the "Zoom" parameter to 58. In principle that is all that is required for the camera. We want to install one more additional effect, however. From keyframe 1, the camera turns through 90 degrees about its own longitudinal axis until it lines up exactly horizontally again. Return to the start of the animation at frame 1 and enter a value of 90 degrees for the "Camera - Roll" parameter. Now it is time for a preview animation with a view through the camera. Activate the camera viewport window and operate the -button. The view through the camera window shows the correct animation; however, the visible detail may not correspond to the rendered animation, because the window does not necessarily have the same resolution as the rendered output. To open the render window and preview the animation with the correct size relationship you can operate the button in the button strip. To produce the final True Color animation you have to operate the button.

Light, Background and Picture Parameters
All lights and the background already exist in the object file. There are two parallel light objects, whose incidence angles and light colors are not animated in this demo animation. You could experiment with them a little by generating a new keyframe for the light objects at frame 60. You need only move to the end of the animation, then call up the light dialog and change the parameters for the light incidence angle or their color. A keyframe for the relevant light object is again automatically generated. The light parameters between the start and end keyframes are interpolated in the animation calculation. The sky can also be animated in the same manner - as you can easily try out. Choose a simple sky for the background with a color path from black to blue. As the scene is quite simple, we can switch on everything that the program offers in the Render Options dialog - Raytracing, Shadow and Reflection at a recursion depth of 2, and antialiasing at step 1. That's it. Now once again start the animation calculation. The example animation shown here is very simple and uses only a few of the functions of CyberMotion. For detailed information on each function you should study the chapter on "Animation".

.topic 61
In this tutorial we set up a rather complex robot animation using object hierarchies. The object files "robot_a.cmo" and "robot_b.cmo" for this project are in the "project" folder under the directory "robot". The complete animation, defining the complex movements that the robot should execute, can be set up in 10 - 20 minutes with the ready-constructed objects using the principle of object hierarchy. It is assumed that you are already familiar with the basic operation of CyberMotion and preferably have already worked through the tutorials "Hall with Columns" and "Simple animation". The objects: In the picture above left you can see the completely assembled robot. Load the file "robot_a.cmo". This file contains the separate objects of the robot in dismantled form and serves as the starting point for the tutorial. In the illustration above right you see the Top view of the separate parts. Here are all the objects again, seen in the Right view. The names of the individual objects are shown to give a clearer understanding. 
The robot has an "arm1," which rotates in the "base" about its vertical axis and three additional joint-arms, namely "arm2," "arm3" and the object "joint." At this last joint is installed the "clutch" consisting of a pivot and a slide, which can be seen more clearly in the Top view. The slide can turn about the vertical axis of the clutch with the aid of the pivot. Lastly there are two objects "jaw1" and "jaw2." These jaws will themselves later move up and down within the clutch slide. Before we go on to build the robot we must take some precautions. The principle of object hierarchy is that the different lower hierarchy objects follow the movements of the higher hierarchy object. However, the lower hierarchy objects can have movements and rotations of their own. In our robot-example this happens when we turn the whole robot just by rotating "arm1" - all subordinated joints will follow this rotation - while at the same time the joints still perform further rotations about their different joints, or the jaws move in their slides. The rotations of the objects are always executed about a defined pivot point, which is identical to the focus of the object-axes. So our first work-step in this tutorial will be to move the individual object-axes of each object to the joint position, about which the individual rotations are later executed. Moving the Object-Axes to Put the Pivot Point of the Objects at the Joint-Axes The alignment of the object-axes should always take place before animating an object. The object's axes system can only be moved or rotated in Modeling Mode. Once an object has been animated then a later displacement of its object axes entails also an unintentional change of the objects behaviour in the animation, since all changes of position and object-angles are recorded in reference to the object-axes. So change first to Modeling Mode, if you are not already there. Go into the "Move Object" mode and activate the viewport with the Right view. Select the object "arm1" with the mouse in the viewport-window. Select the object-axes button in the "Selection" box to make the object-axes and the respective pivot point visible. In the above illustration you can see "arm1" with its object-axes. "arm1" will rotate only about the vertical axis - which here is the object's own Y-axis. Positioning the object-axes is not necessary in this case. Change back to -object selection again and select "arm2". This arm is later connected to "arm1." The rotation of the arm follows the joint-axis at the lower end of "arm2". Switch again to axes selection, so that the object's axes can be moved again and position the axes at the joint at the lower end of "arm2" (see illustration). Now do the same with "arm3" next to "arm2", position the pivot point of "arm3" at the joint-center at the lower end of the arm. The pivot point of the "joint" object must also be moved to slightly below the mid-point of the object. The "clutch", which is later assembled to the "joint", will only rotate about its vertical Y-axis. Repositioning the pivot point is, therefore, not necessary. The clutch- "jaws" will only move up and down along the slide. Here again, repositioning the object-axes is not necessary. Assembling the Robot After the pivot points have been set up we can build the robot. This is quite simple. Firstly we put the "arm1" exactly into the middle of the base. This is best accomplished in the Top view. Then we change into the Right view. 
Position "arm2" so that its joint-axis is at the same location as the hole in "arm1", then move "arm3" to the hole in "arm2". The "joint" can now be placed relative to the hole in "arm3." Place the "clutch" with the slide in the upper half of the "joint". Finally, there remains only the clutch- "jaws." Select both jaws simultaneous and fit them into the top of the slide. The robot is now completely assembled and looks as shown in the picture at the start of this chapter. In the foregoing illustration you can see an enlarged detail of the ready-assembled "clutch" with "joint," and the objects "jaw1" and "jaw2." The Structure of the Hierarchy Tree The robot is now assembled - however the individual parts are still completely independent objects. Next, therefore, we must lay down the connections between the individual objects in a hierarchy tree. Call up the Select Object dialog. The object "base" serves us as root object, which precedes all others in the hierarchy. That is, if the "base" is later selected and moved then all the other objects are automatically moved with it. Immediately subordinate to the "base" is "arm1". Click with the left mouse button on "arm1" . While holding down the button a box containing the name of the object appears. Slide the box over word base until a tool-tip indicating "Link" appears. Release the button and "arm1" is now displayed to the right side of the "base" in the window and is subordinate to that object. Next we arrange the object "arm2" under the object "arm1". Then arrange "arm3" under object "arm2," then the "joint" under "arm3" and the "clutch" under the "joint." Finally both "jaws" are subordinated to the "clutch". The complete hierarchy tree looks at the end as follows: The Robot Comes to Life In this tutorial we will carry out all movements just by selecting the individual joints and rotating them about their object axes. All children linked to the respective joint will automatically follow the rotation. This process - rotating a parent together with its children - is called Forward Kinematic. There is another choice of aligning a chain of joints in a hierarchy that operates the other way round - the Inverse Kinematic. Using Inverse Kinematics you can align a chain of multiple joints just by "pulling" at a child joint and all parent objects will automatically try to follow this movement by rotating in angle positions, that allow the child to move in the desired direction. But previously you have to define degrees of freedom (DOF) for each axes, so that the individual joints don't break out of their hinges. A demo file with initial values for the DOFs is provided under "\projects\ik\ik_robot.cmo". You can find out more about Forward- and Inverse Kinematics in the corresponding chapter refering to Kinematics. Nevertheless, in this tutorial we will carry out everything with simple rotations in the Rotate work-mode - always rotating a parent with its children - first, to exercise important working methods, and second, because of the fact that when several joints are involved in the movement at the same time, the result often doesn't comes out as planned. Now and then you will have to correct arm positions with Forward Kinematics anyway. Even the makers of the great 3D-animation films often do without any Inverse Kinematics in order to maintain full control of the motions. Bringing the robot to life after it has been assembled and connected together hierarchically is not difficult. 
Firstly, we want to bring the robot into a parked position, from which the animation should start. In the start position all the joints of the robot are turned through 90 degrees and are seen from the side in the Front view. Go into the "Rotate Object" work mode and choose the Right view. Select "arm1" with the mouse. All subordinate objects down to the jaws are automatically selected with it. In the "Axes of Rotation" box select the -object-axes as the rotation axes and enter a Y-angle of 90 degrees for the rotation. Operate the button to initiate the rotation. Of course you can also carry out this rotation with the mouse directly in the viewport. Select the Y-axis in the "Axes of Rotation" box, then click in the viewport and move the mouse while holding the left mouse button pressed. The angle through which the object is rotated can again be read in the "Axes of Rotation" box. In the Front view you can now see the robot from the side. Now all joints of the chain only have to be turned through 90 degrees. This is quickly dealt with. Select the object "arm2" with the mouse, enter 90 degrees for the joint angle about the X-object-axis (assuming the old angle input has previously been deleted with ) and confirm the rotation with the button. The result is pictured above. Select "arm3". This also is turned through the same angle, and you again need only press the button. Then the "joint" is angled and once more the rotation confirmed by . The robot now stands in its parked starting position. Now the work of animation begins. Change now into Animation Mode by selecting the corresponding -button. When you select an object in a hierarchy and rotate it, a rotate track with a corresponding rotate key - holding the axes alignment and rotation angles at this frame position - is automatically created in the animation editor. However, this key is only created for the particular parent object you selected in the hierarchy, not for its child objects - they automatically follow their parent's movements. This saves a lot of keyframes and is useful in many situations, but for our robot animation it is more of a handicap. An example: You move the robot arms through several rotation steps into a destination position. Once the arms have approached this position, the jaws are supposed to close in to grab an object. Therefore you move forward in your animation and then move the jaws within their slides to meet in the middle. Now, when you play the animation, you realize that the jaws already start to move in their slides from the beginning of the animation, instead of from the moment when the robot arms have reached their destination point. Since you moved the jaw objects only at the end of the animation, keyframes were generated for the jaws only at that moment. The time sequence for the jaw movement is therefore defined by their initial start position up to the keyframes generated at the end of the animation. But we want every keyframe position to hold, for all joints and for the clutch with its jaws, exactly those positions that we arranged for them in that particular keyframe. Therefore we have to activate the automatic creation of keys for all objects of a hierarchy in the animation button-strip. This is the last -button on the right side of the animation button-strip. Furthermore, position and rotate keys should always be created jointly, even if an object is only moved or only rotated. 
Therefore also choose the corresponding track buttons in the animation button-strip.

Animating the robot:

Key 2 / Frame 11
Press the button once with the mouse to insert a further 10 frames at the start of the animation. The individual rotation steps: select the relevant object and then make the rotations about the relevant axes, as listed. The selected object is always rotated about the object's axes.
arm2: X-axis, +30 degrees
arm3: X-axis, +30 degrees
joint: X-axis, +120 degrees
The result of these actions is seen in the foregoing illustration.

Key 3 / Frame 21
Operate the button again to insert a further 10 frames at the front, to move to frame position 21. Rotations:
arm1: Y-axis, -45 degrees
arm2: X-axis, -45 degrees
arm3: X-axis, +90 degrees
joint: X-axis, -45 degrees

Key 4 / Frame 31
Move to frame position 31. Rotations:
arm1: Y-axis, -45 degrees
arm2: X-axis, -45 degrees
arm3: X-axis, +90 degrees
joint: X-axis, -45 degrees
The result of these work steps is best viewed in the Right view.

Key 5 / Frame 41
Finally, in the last two keys of the animation we want to show the operation of the "clutch". Once again, go 10 frames further in the animation to frame position 41. Rotations:
clutch: Y-axis, +90 degrees
Note that you select the "clutch" and not the object "joint". The clutch turns through 90 degrees about the "joint" and brings the slide with it into the position pictured above. However, in addition the "jaws" themselves move in the clutch. Go into the "Move Object" mode, and now position the "jaws" about 30 units closer to each other in the Y-direction. This makes clear a further advantage of object hierarchy, because it is not only the rotations and movements of the parent that are transferred to child objects. Child objects which move along a path themselves - as here the jaws move along the slide - retain their own movement path, so if the parent moves and rotates, this movement path moves with it. You can see during the animation that, despite the rotation of the parent "clutch", the "jaws" always move up and down within the slide.

Key 6 / Frame 51
The last key is at frame position 51. It is again concerned entirely with the movement of the "clutch". Rotations:
joint: X-axis, +90 degrees
clutch: Y-axis, +90 degrees
In the last key, the "joint" lifts the "clutch" to the top, while the clutch itself turns further about its Y-axis and the closed "jaws" open once more. You must enter the "Move Object" mode again and reverse the "jaws" movement. This time - because the clutch has turned - the "jaws" are moved 30 units apart along the X-axis. This done, the animation of the robot is ready. Using the play buttons above the navigation button-strip you can now see a small preview animation and admire the little dance of the robot. What is still missing are the settings for camera, light and textures. The file "robot_b.cmo" includes the complete animation with all the settings used to create the demo animation. See also: Animation

.topic 67
The following examples are provided as "CMO" project files, which you can load and then experiment with the settings as you please. For a description of the basics of particle systems and the particle editor see: Particle System - Overview
Example 1: Sparkler
Example 2: Grass-Meadow with Dandelions

Example 1: Sparkler
You will find the "CMO" file for this example in the folder "/projects/particle/sparkler/sparkler.cmo". The sparkler animation is an example of how to produce a lot from a little. 
The particle action consists only of an explosion-like particle animation, which constantly emits small triangular particles from the "burnout point" of the sparkler. You can look at the parameters in the particle editor once you have loaded the file. The actual trick of the animation lies in the sparkler burning down and the spray of sparks.

The sparkler burning down: In addition to the stick, the sparkler consists of two identical rotation objects, which are provided with different materials and overlap, one covering the other. One object is given a bright-gray porous texture - the sparkler before it burns down. The other object is given an even more porous black-gray surface - the sparkler after it has burned down. The "burned down" object is fractionally smaller, so that it lies slightly under the surface of the un-burned object. The trick of the rod burning down is based simply on scaling and movement. The animation is 61 frames long. We therefore go to frame 61 and scale the outer envelope of the rod, which covers the inner burnt-down rod, down to approximately half its length. Then we move the rod so that the bottom ends of both rods cover each other. Now the upper half of the burnt-down rod is exposed, while the un-burned rod still masks the lower half. If you start a preview animation now you will see how the outer envelope slowly moves down and exposes the burnt rod.

The glowing point: For the downward-moving glowing point we use a lamp object with the option "Visible Light-Source" switched on. In frame 1 the light source is placed directly in front of the top of the sparkler's rod. At the end of the animation (frame position 61), the light source is moved down to a point in line with the upper edge of the un-burned outer rod of the sparkler. Start a preview animation and you can see how the light source moves with the upper edge of the outer rod.

The spraying sparks: As the reference object for the sparks we use the simplest 3D object - a triangle. It moves only within the area at the top of the sparkler rod. We need not define a further key if we arrange the particle object under the "glowing point" in the hierarchy. The reference object then moves down with the light, and new particles are always generated and emitted at the height of the glowing point. To generate this wonderful star-shaped sparking from the simple triangular particles, we simply use the "Sparkle" function, which you can switch on in the Render Options dialog. In order that shining stars are rendered on all the particles: 1. The material of the particle reference object must be highly reflective. 2. The particle triangles must directly face the light source, so that the light source is mirrored in the triangles and produces the shining-point intensity on the triangle surface. Here, the parallel light source of the scene, which lines up almost front-on to the triangles as seen by the viewer, fulfills these conditions. However, the spin option must not be switched on for the particles, or you will get a sequin-like effect, as the particles then only shine at the moment they reflect the light of the light source directly into the camera.

Example 2: Grass-Meadow with Dandelions
You will find the "CMO" file for this example in the folder "/project/particle/meadow/meadow_anim.cmo". Before you go through this example, please load this file and look at the parameters described here for yourself in the program. Also refer to the Particle Editor. 
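Before walking through the meadow scene, here is a rough sketch of the emission bookkeeping that both particle examples rely on (purely illustrative Python; the parameter names are invented and do not correspond to the dialog fields): within the action's frame range, a randomized number of copies of the reference object is spawned each frame, and every copy is removed again once its randomized lifetime has expired.

import random

# Illustration only: per-frame particle emission with randomized count and lifetime.
def run_particle_action(start, end, number, variance, lifetime, life_var):
    particles = []                                   # list of (birth_frame, lifetime)
    for frame in range(start, end + 1):
        for _ in range(number + random.randint(-variance, variance)):
            particles.append((frame, lifetime + random.randint(-life_var, life_var)))
        # drop particles whose lifetime has expired
        particles = [(born, life) for (born, life) in particles if frame - born < life]
        print("frame", frame, "active particles:", len(particles))

run_particle_action(start=1, end=10, number=15, variance=3, lifetime=30, life_var=5)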
Those of you who have generated plant-like 3D models using L-systems would probably suppose that the scene pictured above uses objects that were also generated by such a system. However, the appearance is deceptive - the stalks of grass and the seeds of the dandelions are generated by particle actions. This is immediately apparent if you look at the completed animation: you can see how the flowers and grass sway in the wind, and then the seeds of the plant detach from the stem and drift away in a flurry. Here you can see the model structure of this "complex" scene. All that we need is a grass-stalk, which is copied 1000 times by the particle action, and a rotation body for the plant's stem. On this stem sits a seed, which is also animated with the particle system.

The particle action for the grass: Look at the parameters for the grass production in the Particle editor. The particle action runs over the whole animation, from frame 1 to frame 100. 1000 additional grass-stalks are generated in the X-Z plane within an area of ±600 units around the reference grass-stalk. To bring some disorder to the meadow, the grass-stalks are randomly scaled within a range of 0.6 ±0.25. The grass-stalks are also rotated - a full ±180 degrees around the vertical Y-axis, but only ±15 degrees about the X- and Z-axes respectively (higher values would lead to grass-stalks lying horizontally or growing over each other). Note: before animating, the object axes of the grass-stalk must be moved down to its base so that all rotations take place about these axes. (If the object axes were to remain in the middle of the object and a rotation were executed, the grass-stalk would not appear to be attached to the ground.) The grass meadow should wave in the wind. We animate the reference object so that it leans a little to the right in some keyframes and is upright in other keyframes. We need only switch on the function "Overlay movement of the reference-object" for this movement to be copied by the particles.

The two particle actions of the dandelion: We need two particle actions for the dandelion: one for the first 43 frames - in which the dandelion waves gently in the wind like the grass - and a second in which the seeds leave the stem and blow away. The first particle action runs from frame 1 to 43. 100 copies of the reference seed are generated and arranged in a hemisphere around the reference object itself, with their roots in the center of the hemisphere. Again, we must move the object axes of the seed object to its base. We then obtain a hemisphere for the newly generated particles by rotating ±90 degrees about all 3 axes. In principle that is all. However, the flower waves in the wind exactly like the grass, so we animate the dandelion stem in the same way as the grass reference object. The seed object is then simply subordinated in the hierarchy and, again, the option "Overlay movement of reference-object" is switched on in the particle editor.

The second particle action: In the second particle action the seeds leave the stem and are whirled through the air. Firstly we copy the first seed particle action. The remaining frames of the animation (frames 44 to 100) make up its frame range. Here, we switch off the function "Overlay movement of reference-object". Now it only remains to enter the movement parameters. The seeds should detach from the stem while it is leaning to the right. The +X object axis of the blossom already leans to the right. You can check this in the "Rotate Object" menu, by selecting the "Axis - object-axis" option. 
In the same menu we turn the object axes by about 35 degrees around the Z-axis, so that the X-axis is inclined to the right as shown above. Now, in the particle editor, we choose precisely this +X-axis as the movement vector and enter a starting speed. For the chaotic swirling we switch on the turbulence and whirl functions. Do not forget to switch off the seed reference object before rendering the animation, otherwise the seed particles fly from the stem while the solitary seed reference object remains sticking to the stalk. See also: Particle Systems

.topic 101
This tutorial introduces the new landscape functions. Our objective is to create a scene that resembles as closely as possible the scene shown above - hilly terrain that merges into mountain scenery, with a small stream running through a pass between the mountains.

Creating the Landscape Object
Select the menu entry "Object - Landscapes..." or the corresponding button in the button strip to call up the landscape editor - a dialog for the production of fractal landscapes. These landscape objects are based on a rectangular grid, with the height information calculated from the grid coordinates with a fractal algorithm. In the editor click on the tab to switch to the page with the basic settings for the landscape definition. The basic parameters define the fractal pattern and the dimensions of the object: Range - The terrain we want to create covers an extensive area. With the Range parameter we can zoom out of the fractal structure used to calculate the height map. Set the Range parameter to 0.51 for our scene. Flat Edges - With this option switched on, the edges of a landscape object will smoothly run down to ground level. As we do not want to smooth down too much of the mountain detail at the edges, we reduce the area influenced by this function to a small band by entering a value of 0.34. Random - The Random parameter supplies different initial values for the fractal pattern and so enables many variations in the calculation of the object. For this tutorial enter a value of 0.63. Smooth Slope - The lower parts of the landscape still appear a little too jagged to represent smooth and hilly valleys, so we increase the Smooth Slope parameter to a value of 0.69 to smooth down the jagged appearance in the valleys. Peak - We leave the default values for the Width and Depth dimensions (each 10000) as they are, but we want the mountain peaks to be raised a little more. Therefore we enter a height value of 2000 for the Peak parameter. The illustration shows the intermediate result as represented by the preview window. Our aim is to generate a hilly terrain progressing into mountain scenery, but there are still too many hills and mountains in the front area. We will remove the peaks later using the painting tools provided in the work mode, but first we have to set a higher resolution for the landscape object. Since each change to one of the basic parameters recalculates the fractal height map from scratch, you cannot change the basic parameters after editing the height map with the painting tools - any modifications made using the painting tools would be obliterated when the pattern is recalculated. Resolution - Provided that there is enough RAM available on your computer, we go the whole hog and enter a grid resolution of 700 * 700 points. Above the Resolution parameter we get the information that this results in a landscape resolution of almost a million facets, exactly 977,202. 
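As an aside, the facet count reported here follows directly from the grid: an n x n grid of height values gives (n-1) x (n-1) cells, each split into two triangles, so 700 x 700 points yield 2 x 699 x 699 = 977,202 facets (and 250 x 250 points about 125,000). The tiny Python sketch below is purely illustrative - it checks the facet count and builds a miniature fractal-style height map from a few summed noise octaves; CyberMotion's actual fractal algorithm is not reproduced here, and the parameter names are assumptions:

import math, random

def facet_count(n):
    # (n-1) x (n-1) grid cells, two triangular facets per cell
    return 2 * (n - 1) * (n - 1)

print(facet_count(700), facet_count(250))   # 977202 and 124002

# Illustration only: a small height map from a few octaves of pseudo-random waves.
def height_map(n, seed, peak):
    random.seed(seed)
    phases = [(random.random() * 6.28, random.random() * 6.28) for _ in range(4)]
    grid = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            h = 0.0
            for octave, (px, py) in enumerate(phases):
                f = 2 ** octave
                h += math.sin(x * 0.1 * f + px) * math.cos(y * 0.1 * f + py) / f
            grid[y][x] = h * peak
    return grid

grid = height_map(16, seed=0.63, peak=2000.0)
print(min(map(min, grid)), max(map(max, grid)))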
If you are low on memory (under 256 MB RAM) you should enter a somewhat lower resolution, for instance 250 * 250 points (ca. 125,000 facets). Now change via the tab to the corresponding work mode in the landscape editor. With the help of the painting tools on the edit page we want to clear away some of the hills in the foreground of the height map, to get a flatter, gently hilly terrain in this area. You can paint directly in the preview window to raise or lower the ground underneath the brush. To flatten out the bumpy hills we choose the work mode by selecting the corresponding button on the page. For the Brush Radius enter a value of 0.3 (corresponding to 30% of the grid length) and for the Strength of the effect enter a moderate value of 0.5. In the illustration above, the area is marked where some of the mountain peaks have been taken away by the "Lower" painting effect. In the next step we want to smooth out the still jagged appearance in this area. Therefore we switch from to work mode for the brush effect. This levels out the area underneath the brush and thus smooths the ground without removing too much of the detail. For the Brush Radius enter 0.25, and for the Strength of the effect 0.3 is sufficient. The height map after applying the "Average" function. Example of the "Raise" working mode: a continuous mountain ridge was created in the illustration above with the help of the "Raise" function, using a very small brush radius. What is still missing now is the little stream splitting the terrain. The course of the river can again be inserted with the help of the "Lower" brush. But first we switch back to the page and there we select the option. Then go back again to the page. When we now apply the "Lower" function by painting the river course into the height map, all facets falling below the Clipping-Height are removed and the course of the stream becomes clearly visible. Switching off the preview option, so that only the height map without lighting is displayed, can be helpful for determining the exact course without confusing shadows. Finally, go over the river course once again with the "Average" brush, to smooth down the steep river banks caused by the "Lower" brush. The main part of the work is done. Now switch back once again to the page in the editor. At the bottom of the page you can decide to split up the object into several separate objects on generation. This will considerably speed up the rendering process when the high-quality raytracing algorithm with shadows and reflections is used for the picture calculation. The more complex the landscape object is, the higher the number of separate objects in the corresponding selector box should be. For instance, a tenfold rendering speed-up can be achieved by splitting up a landscape object consisting of 1 million facets into 25 separate objects. Therefore, for our very complex scene with shadows and the terrain mirroring in the river, we also select an appropriate value of 25 in the "Divide into separate objects" select box. The separate parts of the object will be hierarchically subordinated to a parent object. The highest part of the landscape is always automatically chosen as the parent object. On the one hand this makes it easier to select the whole landscape object for working, because you just have to select the highest elevation in the terrain to mark the parent and with it all children. On the other hand this is important for the later texturing of the terrain. 
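A short aside before we continue with the terrain textures: the "Lower" and "Average" brushes used above can be pictured roughly as follows (a hypothetical Python sketch, not the program's code) - within the brush radius, each grid point is either pushed down by an amount scaled with the Strength, or pulled towards the local average.

# Illustration only: "Lower" and "Average" brush effects on a height grid.
def apply_brush(grid, cx, cy, radius, strength, mode):
    points = [(x, y) for y in range(len(grid)) for x in range(len(grid[0]))
              if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
    mean = sum(grid[y][x] for x, y in points) / len(points)
    for x, y in points:
        if mode == "lower":
            grid[y][x] -= strength                          # push the ground down
        elif mode == "average":
            grid[y][x] += strength * (mean - grid[y][x])    # level towards the mean

grid = [[float(x * y) for x in range(8)] for y in range(8)]
apply_brush(grid, cx=4, cy=4, radius=3, strength=0.3, mode="average")
print(grid[4])

So much for the brushes - now for the terrain textures.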
The special terrain textures depend (amongst other things) on the overall height of an object. For all subordinated objects, the material relates to the same heights on the parent object. Now operate the button to generate the landscape objects. Since the option was also selected, an additional plane object at ground level is automatically created along with the terrain objects. Depiction of the landscape object in the viewport windows. This part of the object creation is now complete. The landscape remains exactly where it is and won't be touched again. Instead we adjust the environment settings - camera, background and lighting - to the dimensions of the landscape object. This way you can go back to the landscape editor at any time and make further adjustments to the height map (for that purpose you should always save your landscape settings to the visual landscape library). Then you can create a new terrain object to replace the old one, without having to adjust all the settings for the rest of the scene again.

Camera - If the option "Set Camera and Atmosphere" was activated on object generation, the camera will automatically be positioned at a good starting point in front of the landscape. Now let's change to the "Move Object" work mode. Reduce the Zoom parameter for the viewports to 4%, so that the whole scene can be displayed in the viewport windows. Call up the object selection dialog and mark the camera object with a click of the right mouse button. Back in the "Move Object" work mode, click in the "top view" window and move the camera - while holding the left mouse button pressed - to the front left corner of the terrain object. Then proceed to the camera menu. For this vast terrain we need a panoramic wide-angle effect, and therefore we adjust the camera zoom to a rather low value of 35. We want to take a picture from a rather low position directly above the flowing river, with the camera directed along the river towards the mountain pass and slightly inclined towards the zenith. Therefore we enter a value of +3° for the camera inclination. We adjust the camera direction simply by clicking in the angle instrument for the direction and dragging the needle into a position that points to the north-east. In the "top view" window you can easily follow the alignment of the camera - a dotted line originating from the camera shows the line of vision. Now comes the fine adjustment. In the camera menu select the option "Move Camera along" . Click in the camera viewport and move the camera, while holding the left mouse button pressed, into a position that gives a view similar to that depicted in the illustration above.

Light and Background
Now call up the background dialog to select an appropriate atmospheric background for our terrain. To simplify matters we just select an existing background from the visual background library. Double-click on the "golden sky" entry, a warm evening sun with a dense haze at the horizon. At the start, the "Panorama, only planes" preview mode is always selected in the background dialog. Now, of course, we want to see a preview in "Camera, complete scene" mode to get a real impression of the interaction of the atmospheric background with the terrain from the current camera view. But with a very complex scene of about one million facets we first have to take some precautions, otherwise the rendering would not be fast enough to speak of a real preview rendering. 
Select the raytracing algorithm without shadows and antialiasing for the preview rendering (the second of the four spheres beneath the preview window). Additionally, you can reduce the picture resolution by clicking on the magnifying-glass button. Switching off the automatic preview update with a click on the button is also indispensable for such complex scenes. This way you can adjust several parameters in one go without having to wait each time for the preview calculations. You can then start a preview rendering at any time by operating the button. Nevertheless, if a preview calculation lasts too long you can interrupt it at any time by pressing the key on your keyboard. A first preview picture of our landscape scene in the background dialog: it looks quite nice already, but an impression of real depth and distance is lacking. The mountains are supposed to merge much more into the distance with the atmospheric haze. As a light ray traverses an atmosphere, some light is extinguished and some light may be added by emission and scattering. This results in a change of color with distance, i.e. dark backgrounds become bluer and light ones redder. All these effects can be simulated using the atmospheric fog and color filter functions in the dialog. First we enter the fog parameters on the "Atmosphere" side of the dialog. The button is activated, so we just increase the Density to 0.20. As in real life, the fog density in CyberMotion decreases with increasing height. At Ground Height you have the maximum density, and a second Height parameter defines the maximum fog height at which the density is almost zero. The ground of our scene is located at a height of zero and the mountain tops at about 2000. So we enter 0 for the Ground Height and 2000 for the maximum fog height. This results in a dense fog layer surrounding the foothills, while the mountain peaks are clearly visible towering above the fog. Now change to the "Atmosphere" side of the dialog. For the sunset effect we enter some additive blue (0.10) and a red filter value of 0.04. After adjusting the atmosphere parameters we have to set up proper lighting for the scene. The sun light object loaded with the "golden sky" background from the library is a little too low and partly hidden by the left mountain. Leave the background dialog and call up the light dialog instead. Basically I'm satisfied with the sun settings; I just want to move the sun a little up and to the right, so that it peeps out right behind the mountain. We can deal with that quickly by adjusting the Inclination and Direction parameters of the parallel light object. At the start, the preview mode "Lensflare, centered" is selected in the light dialog. This preview mode is best suited for displaying activated lens flare effects for a selected light object. For the adjustment of the sun position, the "Camera, background and planes" preview mode is preferable. Now you can click directly in the angle instruments for the inclination and direction of the light incidence angles and drag the needle to a suitable position. Since no terrain objects need to be drawn in the "Camera, background and planes" preview mode, every change in the incidence angles is shown instantly in the preview window. For the inclination angle in my demo I've entered a value of -13.3°, and the direction angle of the parallel light source has been set to -110.7°. Now select the "Camera, complete scene" preview mode to get an impression of how the new light settings affect the whole scene. 
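Before continuing with the lighting, a quick illustrative aside on the height-dependent fog set up above (a rough Python sketch assuming a simple linear falloff between the Ground Height and the maximum fog height - the program's exact formula is not documented here):

# Illustration only: fog density falling off with height between ground and maximum.
def fog_density(height, density, ground_height=0.0, max_height=2000.0):
    if height <= ground_height:
        return density
    if height >= max_height:
        return 0.0
    return density * (1.0 - (height - ground_height) / (max_height - ground_height))

for h in (0.0, 500.0, 1000.0, 1500.0, 2000.0):
    print(h, round(fog_density(h, density=0.20), 3))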
The picture appears a little too dark, as the sun is still very low on the horizon. Instead of increasing the inclination angle to move the sun further up in the sky, I prefer to add a second parallel light object. This light object will act as a supplementary area brightness (in addition to the ambient light), simulating the general light coming in from reflections in the atmosphere. Consequently we don't switch on the option for the second parallel light object (yes, you can set up several suns simultaneously in the background). Instead we activate for the light source in order to prevent time-consuming shadow calculations for a light source that only acts as additional area brightness. For the incidence angle of the light enter a value of -32.0° for the inclination and 57.7° for the direction. The light color ought to be a very dark gray, contributing only a little additional intensity to the scene. The preview displayed in the light dialog after adjusting the light settings. You may wonder why I did not just increase the intensity of the ambient light object instead of adding a second parallel light object for the additional area brightness. That is because of the material settings we want to add later. No matter how many points and facets are used to build a complex terrain object, the real impression of detail comes from a good surface texture. These textures ought to provide not only color patterns that are as realistic as possible, but also an impression of bumpy and irregular surfaces. This is dealt with by normal distortion. The surface normal is a vector standing perpendicular to the surface and is used to determine the surface brightness with respect to the incident light. Distorting the surface normals allows a raised appearance to be added to the surface structure. And that's the point: as the ambient light object acts only as an additional intensity value representing the general area brightness - without an origin there are no incidence angles for the light - you can't use it to emphasize bumpy structures calculated from normal distortions.

Material
Finally, the material settings for the terrain and the water plane. Leave the light dialog and switch over to the material dialog. For the plane object we simply choose the "rippled" material from the visual material library on the right side of the dialog. This material provides an appropriate water texture for our river, but we still have to adjust the flow direction of the streaming water. In our scene the camera is directed north-east along the river passing through the mountains. Therefore the flow direction of the river should also point in that direction, or exactly the opposite way - flowing up- or downstream. In my example the river flows towards the camera with a direction angle of -131.3°. When creating a landscape object, the color range used to display the height map is also automatically assigned as a color-range texture for the object. You can use such a color-range texture, for instance, to simulate sedimented rock-layer structures. However, for our mountain scenery a fractal rock texture is preferable. The "Landscape" entry in the visual material library is a suitable material prepared especially for this scene. As mentioned earlier in this tutorial, you only need to load the material (double-click on the "Landscape" library thumbnail picture) for the parent terrain object in the hierarchy - all other subordinated terrain objects are referenced to the parent's material settings. 
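A brief illustrative aside on the normal distortion mentioned above (a rough Python sketch, not the program's shading code): the diffuse brightness of a surface point depends on the angle between its normal and the light direction, so perturbing the normal changes the brightness and fakes a bumpy relief - while an ambient term has no direction and therefore cannot react to a distorted normal at all.

import math

# Illustration only: diffuse brightness from a (possibly distorted) surface normal.
def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def brightness(normal, light_dir, ambient=0.1):
    n, l = normalize(normal), normalize(light_dir)
    lambert = max(0.0, sum(a * b for a, b in zip(n, l)))
    return ambient + lambert        # the ambient part ignores the normal entirely

light = (-0.4, 0.8, -0.45)          # roughly: a low sun direction (invented values)
print(brightness((0.0, 1.0, 0.0), light))   # undistorted normal
print(brightness((0.2, 1.0, 0.1), light))   # distorted normal gives a different brightness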
The "Landscape" material defines a rock pattern based on a procedural fractal noise texture. However, the material is far more complex than this. Select the tab in the material dialog to get to the material side for additional terrain texture layers. On this side you can define up to three additional fractal texture layers for the object. The way in which these layers are applied is dependent on the slope angles and the height of the surface. For instance, you can define a white snow texture that covers only areas that lie high in the mountains and have moderate slopes. A random distortion and blending parameters provide additional irregularities and smooth transitions. For our "Landscape" material the following three layers were applied: Layer 1 - (Soil) - Right at the bottom lies an earth-colored ground layer that reaches up only to the base of the mountains, mainly to cover the steep and grassless banks of the river. Landscape preview after applying the soil-layer. Layer 2 - (Grass) - Above the soil layer lies a grass layer. Besides the green colors a highly modulated normal distortion takes care of an appropriate noise on the grassy surface. Additionaly the function has been activated, so only patches of the grass layer will show on the ground, mingling with the underlying soil layer. Landscape preview after applying the grass-layer. Layer 3 - (Snow) - (see illustration at the beginning of the chapter) - Right at the top of course the snow layer. One word on the Height parameter: it is specified as a fraction of the overall height up to which height the texture is applied. However, usually snow lies in higher areas and disappears in lower and warmer heights underneath the snow line. To take that into account you just have to enter a negative value for the height parameter. The height calculation is then reversed, starting from the top of the mountain and running downwards towards the ground. Again the function has been switched on to add more complexity to the texture layer. Further details describing the individual parameters for landscape textures are provided in the corresponding chapter: Material Dialog - Landscape Textures. Ok, that's it. Go to the Render Options dialog and select raytracing with shadows and reflections for the rendering and choose a suitable picture resolution. Start the rendering and enjoy your work. See also: Landscapes and Planets .topic 81 This tutorial demonstrates the use of animated object deformation. You will find the associated animation file "dolphins.cmo" in the folder "projects/dolphins". To understand the complete demo, you should at least have a basic understanding of CyberMotion - especially its animation functions. Also, it won't hurt if you have also read this manual's section on Animation. In this animation, the camera is fixed at a point below the water surface. A ball is drifting in the moving waves right in front of the camera. At the same time, a dolphin approaches the ball and flicks it up with his nose, followed by a steep dive. This results in a swirl of air bubbles, pulled into the water by the diving dolphin. Finally, the dolphin passes the camera with small movements of his flippers. Actually, this is a very basic animation, employing two planes - for the seabed and water surface, respectively. The texturing is outlined further below. We will start with the most complex part of this animation, ie. the dolphin's movement. This screenshot shows the animation paths of the dolphin (yellow) and the ball (white) - as seen from the right. 
Both the dolphin and the ball are moving along smoothly curved paths - the dolphin controlled by its swimming movement and the ball by the bobbing movement of the waves as well as the impulse introduced by the dolphin's nose. This smooth overall movement is made possible by switching to the animation editor right at the start to activate B-spline interpolation for both the dolphin and the ball. This function is automatically activated for each new keyframe, so we don't have to be concerned with it anymore. This is all it takes, so we can now start moving the dolphin from keyframe to keyframe. We use the animation button-stripe to jump 10 frames forward in our animation and move the dolphin to a key position, so as to make a nice and neat curve as it nears the water surface, where it will hit the ball later. While the animation for the dolphin might look elegant enough at this point, the object itself still looks rather stiff and not very lifelike. We will change this immediately using the "Deform Object" working mode. Here, we will deform the dolphin smoothly at the key positions to resemble the screenshot above. When the in-between frames are calculated later, the movement should look realistic and natural - just like a live dolphin swimming. Activate the bend function in the "Deform Object" menu. The body of the dolphin lies along its x-axis - so we will use this as the axis for deformation. The body is bent up and down along the z-axis, so you will have to select "Bend Object - About Axis Z". Now we can use the mouse to bend the dolphin up or down for each animation keyframe. Once this has been done, it will result in a smoothly curved animation path whose key positions should sit right at the peaks of the movement curve. When you preview the animation at this stage, the swimming motions will not look very realistic - instead, the dolphin will just flex its body while zooming through the water as if it were hurt. So we will have to change the direction at each of the key positions, which is done by adjusting the animation at the exact middle frame between every two keyframes. Erase the deformation by setting the deformation parameters to zero at each of these points, so the dolphin is balanced along its horizontal axis. Then use the "Rotate Object" working mode to rotate the dolphin around its z-axis to align it with the animation path so it will actually dive to the seabed. See the screenshot showing part of the animation with its keyframes. If you render a preview now, you will see that the dolphin has actually learned to swim with entirely realistic movements. Now we return to the point in the animation where the dolphin approaches the water surface to play with the ball. Bend the dolphin in a way that makes its nose break through the water to hit the ball. Generate the key positions in such a way that the ball moves in a line but also slowly wobbles up and down as it approaches the point where it meets the dolphin's snout. Generate additional keys to have the movement proceed linearly, but with a more extended vertical movement to simulate the flight of the ball. Also, the ball should be submerged somewhat after its re-entry into the water and before it starts wobbling with the wave movement again. Now return to the key position where the ball is hit and start the animation editor. For the ball, enter a negative acceleration of about -0.50 in the "From Key" parameter.
This will make the ball go slower as its height increases (you could also say that the movement decelerates), simulating the influence of gravity. In the next key - at the point of return - we intensify this effect by entering again a negative acceleration in the "To Key" parameter, for the approach to this key position, and additionally a positive value of 0.50 for "From Key", for the behaviour after the key position when the ball begins to fall again with increasing velocity. This is all it takes to animate the objects, but you might want to make fine adjustments by repeatedly previewing the animation and correcting the behaviour in each keyframe, changing object distance, position or deformation to achieve a smooth and realistic flow. Particle Animation Once the dolphin has stopped playing with the ball, it dips below the water surface, taking a swirl of air bubbles with it. This effect is created using the particle animation features. Construct a small transparent sphere as the particle reference object and place it right in front of the dolphin's nose. Use the Select Objects dialog to put the bubble into the dolphin's hierarchical branch, so that it moves with the dolphin's nose. Don't forget to deselect the bubble, as the reference object should be invisible during the animation. Now proceed to the particle system editor. Start a new particle action named "Bubble1" and select the bubble object as the particle reference object. The other parameters: Particle object - Number: 15 ±3 particle objects (air bubbles) should be generated for each frame. Particle system - Range of frames: In frame 51, the dolphin's snout breaks the water surface, so this is the exact point where we want to start the particle animation, which runs for about 60 frames. Enter frame 111 as the end of the particle action. Create new particles every 500 frames, during a period of 13 frames, with a Lifetime of 30 ±5 frames. This means that the particle action started in frame 51 will generate between 12 and 18 air bubbles per frame for 13 frames, all of which will be destroyed after 25 to 35 frames. Since the bubbles should go "up" after they are generated, we will select negative gravity under the "Particle-Action" settings. Add a small amount of rotation to make the particles behave just as a real stream of bubbles would. Coming Together - Material, Light and Background The swimming dolphin is only one of this animation's eye-catching features, as its impression also depends heavily on the moving water surface and the light reflecting off the seabed. Waves We will now deal with the material settings for the plane object "surface", which is used to mimic the behaviour of a water surface as seen from below. In the material editor, choose a crystal-clear white object color and select transparency. You won't have to filter the light coming from above by adjusting the object's material settings, as it is more convenient to use the light and background atmosphere properties. First, switch off the plane object's shadow by selecting . Change to the page in the material dialog and activate the function with 0.11 as the recommended setting for scaling (producing a widely spread wave field) and set the distortion to a moderate value, such as 0.38. Now select the "Waves" animation feature and enter a slow speed of around 0.18 for smoothly rolling waves. To match the ball movement with the water flow we adjust the "Waves" flow direction to -90 degrees.
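As a rough picture of what such an animated distortion does, the sketch below shifts a simple noise-like function across a plane over time and derives a perturbed normal from its slope; the wave function and the parameter names (scale, distortion, speed) are assumptions made for this example, not CyberMotion's actual wave generator.

import math

# Hypothetical stand-in for a procedural wave field - illustration only.
def wave_height(x, y, frame, scale=0.11, distortion=0.38, speed=0.18):
    # The pattern drifts with 'speed' per frame; 'scale' spreads it out,
    # 'distortion' controls how strongly the surface is deformed.
    phase = frame * speed
    return distortion * math.sin(scale * x + phase) * math.cos(scale * y)

def perturbed_normal(x, y, frame, eps=0.01):
    # Tilt the flat normal (0, 0, 1) by the local slope of the height field.
    dx = (wave_height(x + eps, y, frame) - wave_height(x - eps, y, frame)) / (2 * eps)
    dy = (wave_height(x, y + eps, frame) - wave_height(x, y - eps, frame)) / (2 * eps)
    return (-dx, -dy, 1.0)

print(perturbed_normal(10.0, 5.0, frame=0))
print(perturbed_normal(10.0, 5.0, frame=30))   # same point, later frame -> the normal has moved on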
Sea Floor and Light Reflections from the Moving Water Surface For the sea floor we apply a tiled bitmap of a sandy ground texture. Now we use a special trick to simulate light shining through the moving water surface and being reflected on the sea floor. For this purpose we simply switch on the function for the sea ground too, this time with a wider spread for the scaling (0.03), a distortion value of 0.2 and a maximum speed of 1.0. Although the sandy sea ground is very rough, we enter a high reflection value of 0.6 but do not select the button, since we do not want real mirror effects on the ground. The highly specular surface in combination with the "rolling" ground results in moving highlight reflections that closely resemble the light reflections coming from the moving water surface. Background In the background editor we select a 3D sky for this scene, which lets its shaded tint shine through the water surface. Additionally, we can make good use of the atmospheric fog and color filter settings for this background model, as they allow us to make the scenery fade away into the distance, as is natural in a submarine environment. The atmosphere effect works by filtering the light depending on distance, and without this feature the scene would not really look as if it took place in a murky, underwater environment. Light Two parallel light sources are used for the main illumination of the scene. The first simulates the sunlight coming from above; the second parallel light points in the opposite direction, at an angle against the underside of the water surface, simulating light reflected from the sea floor beneath. Consequently a darker shade of gray is chosen as the light color, and shadow generation is switched off for this source. Finally, the ambient light source is switched on to take into account the light that is scattered around by the many particles floating in the water. To soften the underwater shadows we enter a radius of 100 and 21 shadow sensors for the sunlight. Rendering Parameters The animation is rendered using raytracing with the options , and switched on. .topic 650 Many thanks to Pascal Heußner, who wrote this tutorial. This tutorial explains the basic principles of character animation. We will animate a simple figure constructed only from some bent cylinders for the body and a sphere for the head. All parts were joined together using Boolean operations, resulting in the little hero you see in the picture above. This tutorial cannot cover all the possibilities you have at hand with CyberMotion. If you want to animate more complex figures and movements you have to read everything about animation. Now load the file "projects/character/tutorial_character.cmo". It contains the little man for our tutorial. Creating the skeleton Before we can start animating the figure we first have to create a skeleton. Change into Edit Skin and Bones work-mode. First we need a root bone, from which the other bones will originate. The pelvic bone is best suited for this purpose. To create the root bone, position the crosshairs at the pelvis of the little man. Click on to define the starting position for the bone. This bone will be the root bone without length, so we do not need to drag a length out of the starting point. Instead we operate right away - the first bone will be added and the name dialog appears, where you can give it a suitable name.
Since a complex skeleton can consist of dozens or even hundreds of bones, you should take the trouble to assign unambiguous names to all bones. Now call up the Select Objects dialog. Link the root bone "pelvic" hierarchically under the object "man". By linking bones under the "man" object, this object is automatically recognized as a deformable skin for the subordinated skeleton. Little man with pelvic bone Leave the Select Objects dialog again. Now we add bone after bone for the rest of the body. Select the "pelvic" bone again. Grab the crosshairs with the mouse between its center arrows and drag another bone for the right thigh out of the pelvic bone. Position the crosshairs at the knee joint and operate . The bone "thigh_right" is created and the starting point for the next bone jumps automatically to the tip of the thigh bone. Drag another bone for the lower leg out of the thigh bone, press and name it "lower_leg_right". Select the "pelvic" root bone again and drag the "thigh_left" and "lower_leg_left" out of it. Little man with pelvic and thigh bones Now we have to insert the bones for the upper part of the body. Select the "pelvic" bone again and drag out another bone up to the middle of the upper body. This bone is named "lower_spine". From this bone we pull another bone up to the neck, the "upper_spine". Since our man has a rather slender chest, we can do without shoulder blades. Instead we pull the bones for both upper arms ("upper_arm_right" and "upper_arm_left") and forearms ("forearm_right" and "forearm_left") out of the tip of the upper spine. What is still missing now are the bones for the neck and the head. So select the "upper_spine" again and drag out the bone for the neck and finally the head bone. The final skeleton While dragging bones out of other bones, the hierarchy tree is built automatically, since new bones are always linked under the bones they were pulled out from. The hierarchy tree - The object "man" is the parent of the whole skeleton and serves as the deformable skin. Directly subordinated to the skin is the "pelvic" root bone. All other bones originate from the pelvic bone. Allocating the Skin Points Each bone is to influence only a particular part of the skin, so before we can start to animate the character we have to allocate the skin points to the individual bones. For this purpose we now change to the "Edit Skin" page by clicking on the skin tab at the top of the tool window. We start with the right lower leg. Select the corresponding bone with the mouse and then switch over to -"Selection - Allocate Skin Points". Now select all points of the lower leg (hold the key pressed when adding points to the selection, hold the key down when removing points from the selection). You can also use the "Add Points within Radii" function to add all points within the radii of the bounding cones for a fast pre-selection and then remove or add points from this selection. The point selection for the "lower_leg_right". Now select the right thigh bone. To be able to do this you have to switch back to -"Selection - Select Bones". Mark the thigh bone with a mouse click and return to -"Selection - Allocate Skin Points" right away. All points that are already allocated to other bones are marked in green. But you can allocate points to several bones at the same time. When this happens, the point weight - or rather, the influence of a bone on this point - changes. The point weight will be distributed in equal shares to all bones that have a reference to this point.
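To picture what the equally shared point weights mean for the deformation, here is a small, generic linear-blend-skinning sketch (my own illustration of the general technique, not CyberMotion's implementation): each skin point is moved by the weighted average of the movements of all bones it is allocated to.

# Generic linear blend skinning sketch - illustration only, not CyberMotion code.

def blend_point(point, bone_transforms, weights):
    # point: (x, y, z); bone_transforms: functions mapping a point to a point;
    # weights: one weight per bone, summing to 1 (equal shares when a point is
    # allocated to several bones, e.g. 0.5 / 0.5 for two bones).
    x = y = z = 0.0
    for transform, w in zip(bone_transforms, weights):
        px, py, pz = transform(point)
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)

thigh = lambda p: (p[0], p[1], p[2] + 2.0)       # the thigh bone moves its points up
lower_leg = lambda p: (p[0] + 1.0, p[1], p[2])   # the lower leg moves its points sideways

knee_point = (0.0, 0.0, 0.0)
# A point allocated to both bones receives half of each bone's movement:
print(blend_point(knee_point, [thigh, lower_leg], [0.5, 0.5]))   # -> (0.5, 0.0, 1.0)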
Now select the corresponding points for the bone "thigh_right", as shown in the illustration above. Repeat these working steps for the left leg. Then allocate points to the spine bones and to the arm, neck and head bones: The "lower_spine" bone gets the lower half of the body cylinder, the "upper_spine" the upper half. Allocate the cylindrical area between head and arms to the "neck" bone. The "head" bone gets the whole elliptical head. See to it that all points of the skin are distributed among the bones, otherwise individual points will stick to their position when the character is animated. The number of remaining points can be read in the tool window. You do not need to allocate points to the "pelvic" bone. The "pelvic" only serves as the root of our skeleton hierarchy. Animating the character The first part of the tutorial is done. Now we will animate the character. Move the "man" to the starting point of the running track. In the Right view the man is now facing to the left. Now change into Animation Mode. Right at the beginning we want to record the position and alignment of the "man" in a first keyframe. But first we have to select the tracks for which the keyframes are to be recorded. In each key position we want the whole "man" with all of his subordinated bones to be recorded in a fixed position; therefore position and rotate keys have to be created, always both at a time and for the whole hierarchy. Therefore we select the corresponding buttons in the animation button-stripe - position track, rotate track and key generation for all objects in the selected hierarchy. Now select the "man" in the viewport and operate the record button to create the keyframes for the character hierarchy. Now we move forward in the animation. For each footstep we need about 4 frames. Consequently we go forward to frame position 5. In this tutorial, with only a simple skeleton without hands and feet, a quite simple animation will be sufficient. Degrees of Freedom and Inverse or Forward Kinematics are of no interest here. Actually, the complete tutorial can be carried out in the "View - Right" viewport window by simply rotating the individual bones clockwise or anticlockwise in the viewport plane and moving the character along the running track. Change into the "Rotate Object" work-mode. Choose the "World-Axes" in the "Axes of Rotation" box and, for the "Mouse Lock", choose the circular button on the right side - it will restrict rotations to the axis that stands perpendicular to the viewport plane. If you now select a bone in the Right view, you only need to rotate it clockwise or anticlockwise to deform the skin into the desired position. Keyframe 2 - Frame position 5 The first step will be to bring the figure out of the standing position into an intermediate stage on the way to the first footstep. Lift the left thigh by rotating it about 70° to the front. Then the "lower_leg_left" has to be rotated the other way round to bring it back into an angled position. That's it. Go forward again 4 frames in animation time. Keyframe 3 - Frame position 9 In this frame stretch the "lower_leg_left" again. Then, one after the other, select the thighs and rotate them so that they form an open triangle. By doing this the figure will lose contact with the ground, but that does not matter for now. If you animate a character, always concentrate on the posture, the alignment of the limbs, by rotating the bones into the correct positions.
When you are done with that, grab the whole character by selecting its skin and move and rotate it back to its destination position. For our little man this means you have to move the character back to the position where the rear foot touches the ground in the same position it occupied in the previous keyframe. Use the -buttons to jump back and forward between keyframe positions, so you can compare the positions of the character in the different keyframes. After adjusting the legs, the arms have to be aligned too. There are no proper motion dynamics if the arms hang motionless at the sides of the body. Try it for yourself. If you make a step forward with the right leg, then the left arm moves forward, too. This means that when animating a walking or running sequence you always have to counter the leg movements with the opposite arm movements. That's how it works - left leg forward, right arm forward, and vice versa. Keyframe 4 - Frame position 13 Move again 4 frames forward in time. The posture at this frame position is almost identical to that of the second keyframe at frame position 5, only this time the figure stands on its left leg with the right leg on its way forward. So rotate all bones accordingly again and move the figure forward until the foot position of the left leg in this key matches the foot position of the previous keyframe. Go another 4 frames forward in time and complete the footstep similarly to keyframe 3, this time with the right leg in front and the left arm swinging forward. Now start a first preview animation. The character almost runs through a complete walking sequence. But we want our little man to make several more footsteps. Since we have already animated all intermediate postures lying between two footsteps, we can now simply use the copy and paste functions provided in the animation editor. The chapter "Absolute or Relative Copy of Position- and Rotate-Tracks" describes in detail the basic principles of copying absolute object positions and alignments in contrast to copying only the relative pattern of a movement. There is also an example of how you can extend a walking sequence. You should read that chapter thoroughly if you want to understand the following operations. In the following tutorial I will only describe the individual working steps without explaining them in detail. Call up the animation editor now. In the first keyframe the character is standing in the starting position. The walking sequence is contained in keyframes two to five. For a clean loop of the walking sequence we need, for the next keyframe, the same character posture as in keyframe 2, when the character started to walk. For this purpose we select keyframe two at frame position five (illustration above on the left) and copy it (Absolute Mode) over to frame position 21. Now leave the animation editor. Since we copied the key data as absolute positions and angles, the man takes on the identical body posture as in keyframe 2 - but that also means that he stands again at the starting position of the animation on the right side of the screen. Therefore we have to move the figure to the left again, to its new destination. Now a whole walking sequence is ready. Starting from keyframe 2 the character moves in a complete sequence from the starting position (not the standing position but the first in-between with the left leg lifted) two steps forward until he takes up the same body posture as in keyframe 2 again - just two steps further to the left.
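To make the difference between the two copy modes easier to picture before the next step, here is a tiny, generic sketch (made-up data, not CyberMotion's track format): an absolute copy repeats the stored positions verbatim, while a relative copy re-applies only the frame-to-frame offsets, starting from wherever the object currently is.

# Generic sketch of absolute vs. relative copying of a position track (x-values only).
# The numbers are invented for illustration.

walk_cycle = [10.0, 8.0, 6.0, 4.0]          # key positions of one walking sequence

def paste_absolute(track, sequence):
    # The pasted keys repeat the original positions - the character jumps back.
    return track + sequence

def paste_relative(track, sequence):
    # Only the offsets between keys are re-applied, starting from the last key.
    result = list(track)
    for prev, cur in zip(sequence, sequence[1:]):
        result.append(result[-1] + (cur - prev))
    return result

print(paste_absolute(walk_cycle, walk_cycle))   # ... 4.0, 10.0, 8.0, ...  (jumps back to the start)
print(paste_relative(walk_cycle, walk_cycle))   # ... 4.0, 2.0, 0.0, -2.0  (keeps walking on)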
This is the sequence we can now use for a relative copy of the movement pattern. Copying this sequence in Relative Mode will make the character walk on independently, so that we don't need to adjust the positions of the character any more. Call up the animation editor again. Select the frame range of the walking sequence. This is the first frame after the second keyframe, frame 6, and then, with and a second mouse click, frame 21. The second keyframe at frame position 5 has to be omitted, since it contains the same posture as the last keyframe; if you copied the sequence including it, you would get two identical postures one after the other. It is also important to copy the leading empty frames 6 to 8 together with the keyframe range, so that empty spaces are inserted correctly when the sequence is pasted again at frame position 22. We want the character to make 4 more steps with both legs, and instead of pasting the sequence 4 times one after the other we make use of CyberMotion's Multi-Paste function. Just enter a 4 for the Multi-Paste parameter. Before copying the data to the buffer we also have to choose the Relative Mode for the Position track as well as for the Rotate track. Now you can click on the Copy button to copy the selected frame range to the temporary buffer. Then select the destination frame 22. Press the Paste button and the data will be inserted repeatedly at the destination point. Summary: Key 1 = starting position, keys 2 to 6 = complete walking sequence, with key 2 holding the same posture as key 6. Copying keys 3 to 6 repeatedly behind key 6 multiplies the walking sequence. Since the postures in key 2 and key 6 are identical, we can copy the movement pattern in Relative Mode - the character moves on independently. The completed tutorial - the character walking on and on... Another example of a somewhat more complex character, already animated in a simple walking sequence, is provided in the projects folder under "..projects/character/man_walk.cmo". You can also find the pure animated skeleton (without skin) of that scene in the project folder "..projects/character/skeleton_walk.cmo". The model of the character was kindly provided by the artist Stefan Danecki. .topic 120 Everything about object management, groups and hierarchies, reference objects and how to work on individual facets and points. Select Objects Dialog Switching Objects On or Off Marking Objects for Processing Selecting a Reference Object Switching All Objects On or Off Managing Groups of Objects Copy Objects Delete Objects Changing Object Names Boolean Operation Selecting Objects, Facets or Points in the Viewport Marking Objects for Processing Reference Object Selection Selecting Individual Facets or Points for Editing Arranging Objects in Hierarchies Why to use object hierarchies and how to arrange objects in a hierarchy The Popup Selection Click with the right mouse button in a viewport window to open the popup selection for fast access to often-used functions .topic 14 In order to work on an object, the object must first be switched on and then marked for editing. Furthermore, it can be useful to combine separate objects and manipulate them as a group. You can do this - and many other things - in the Select Objects dialog. However, you can also mark objects directly with the mouse in the viewport windows. Other functions introduced here in the Select Objects dialog can also be accessed through the popup selection that opens when you click with the right mouse button in a viewport window.
See also: Selecting Objects, Facets or Points in the Viewport Viewport - Popup Selection - Menu "Objects - Select Objects" - Short Cut: + "O". When the Select Objects dialog is called up for the first time, you will note that there are already several objects displayed in the selection window. In it, in addition to the camera, are two lights and the background object. The preset illumination consists of the light object "AMBIENT" (general area brightness) and the light object "PARALLEL" (a parallel light source). You can switch the three standard objects "CAMERA", "BACKGRND" and "AMBIENT" on or off, but since there can be only one of each of these basic objects you cannot delete or copy them. If a normal object or a light object is switched off, it will not be included in picture generation, i.e. the lamp is not "on". Nor will a background be drawn when the picture is later generated if the background object is off. The camera is always active, as there can be no picture without a camera. However, you can still switch it off to prevent the camera symbol from being drawn in the viewport windows. The camera, the background, all lights, NURBS and analytical objects, and the skeleton bones are prefixed with a special icon to distinguish them from the "normal" faceted objects. Switching Objects On or Off Just click on an object's name with the left mouse button to switch the object on or off. If you click on a parent object then the complete subordinated branch in the hierarchy will also be selected. You can switch individual objects in a hierarchy branch on or off by holding the -key pressed when clicking on the object's name. After leaving the Select Objects dialog, all objects that are switched on will be drawn, but in order to manipulate individual objects - i.e. position, scale or rotate them - the relevant objects must first be marked. Marking Objects for Editing To mark an object, click on the object's name in the Select Objects dialog with the right mouse button. If you click on a parent object then the complete subordinated branch in the hierarchy will be marked too, since child objects always follow their parent's movements. But you can still switch off individual objects in a marked hierarchy ( + left mouse button), for instance to mark a single object in a hierarchy for deleting. To mark or unmark additional objects, click with the right mouse button while holding down the button. The names of marked objects are shown in red letters on a black background. In the viewport windows the marked objects are emphasized by a different outline color, which can be set in the Work Colors dialog. You can, however, also select objects for editing directly in the viewport window by clicking on them with the left mouse button. This is the simplest and quickest method in most cases. Only objects that are visible and can be directly manipulated in the viewport windows (e.g. moved, scaled or rotated) can be marked for editing. The "AMBIENT" light (area brightness) and "BACKGRND" (background mode) objects, which cannot be manipulated in the viewports, can only be switched on or off here. Multiple Selection While holding the and the left mouse button pressed you can also drag a framework to enclose all objects you want to select. Thereafter a popup list opens and you can choose to switch on/off or mark/unmark this selection. You can also use the -shift key to select a range of objects.
If you click on a name (left or right mouse button) while holding the shift key pressed, then all objects lying between the last selected object and the currently selected object will be switched off/on or marked/unmarked, respectively. Switch On/Off or Mark/Unmark All Objects Simultaneously The four buttons next to the selection window enable you to simultaneously switch all objects on or off, or to (un)mark for editing all objects that have been previously switched on. The button switches all objects on. The button marks for editing all objects that are switched on. The button removes marks from all objects previously selected for editing. The button switches all objects off. Changing the Reference Object in a Marked Selection If you mark only a single object then this object will automatically be the reference object. This means that all coordinates and object dimensions, or the position of the object axes, printed in the tool window's parameter fields will refer to this object. If you mark a hierarchy branch then the reference object will always be the topmost marked parent object of that branch. If a group of several independent objects or hierarchies is marked simultaneously, then you can determine the reference object by clicking on the desired object or hierarchy while holding the - and -shift keys pressed simultaneously. This key combination can be applied in the viewport as well as in the Select Objects dialog. In the Select Objects dialog, reference objects are emphasized by an additional arrow following the name of the object. In the viewport, reference objects are highlighted through a different representation color (default = yellow). Manipulating Several Groups of Objects With the help of the "Select Object Group" select box, you can create up to 8 different groups of selected and marked objects and switch back and forth between them. The same selector box is found in the button strip directly above the viewport, so that you need not invoke the Select Objects dialog every time you want to change between 2 groups of objects while editing different objects. Copy Object The button duplicates all marked objects in the selection. Each copied object is preceded by the indicator "-" to distinguish it from the original. Next to the button is a selector box in which you can decide how the object is duplicated. Normal - The copy is identical to the original. Mirror - Vertical - The copied object is mirrored about the central Y-coordinates. Mirror - Horizontal - The copied object is mirrored about the central X-coordinates. As stated at the start of this section, the objects "CAMERA", the light "AMBIENT" and the background object "BACKGRND" cannot be copied or deleted. Delete Object The button deletes all marked objects. You can also delete marked objects outside of this dialog at any time by pressing the Delete key on your keyboard. Deleted objects may be recovered by pressing the Undo button on the toolbar. As mentioned at the start of this chapter, the objects "CAMERA", "AMBIENT" and "BACKGRND" cannot be copied or deleted. Changing Object Names After double clicking on an object's name you can edit it. You can use up to 255 characters to define a name. The names of the objects "CAMERA", "AMBIENT" and "BACKGRND" cannot be changed. However, this does not apply to any other light objects. Boolean Operation If you choose the button in the "Edit Object" tool window, a popup selection opens with several entries for joining objects in different ways.
Those entries are found also in the popup selection, which opens, when clicking with the right mouse button into a viewport window and once more in the "Edit Object" menu, thus enabling access to these functions in every work mode. Look at this chapter for a detailed description of Boolean Operations. .topic 19 Objects, facets or individual points can be selected simply by mouse clicking on them in the viewport windows. Hold the key pressed simultaneously for adding objects to the selection or the -shift key to remove objects again from the selection. Through the first three "Selection" buttons at the top of the toolbox you can choose between object, facet or point selection mode. The fourth button - the object axes selection - is only available in "Move Object" and "Rotate Object" work-mode and enables you to move or rotate the axes system of a selected reference object. Individual facets, points or the object axes can only be selected and worked on in Model Mode. If you change to Animation Mode then only object selection will be available. But this doesn't mean that you cannot animate on a per-point-basis. If you want to animate deformations of an object's surface you can make use of the skeletal deformation functions. Selecting Objects in the "Select Objects" Dialog You can also conveniently select objects using the Select Objects dialog, which provides many other additional functions, e.g., to switch objects on or off, to copy or delete them, to rename or to group them or to arrange them in hierarchies. Some of these functions can also be accessed from a popup selection by clicking with the right mouse button in a viewport window. Selecting Objects for Processing If you want to work on an object (e.g. move, scale or rotate it), you must first mark the object. You need only click on an object with the left mouse button in the viewport window to select individual objects. Marked objects are highlighted in a different grid color (which can be determined in the Work Colors dialog). If the object you require cannot be readily selected because several other objects overlap it, then a depth sorted popup select box appears in which you can choose the desired object. Selecting several objects - If you want to work on several objects simultaneously, then you press the button on the keyboard in addition to the mouse click with the left mouse button. While holding the and the left mouse button pressed you can also drag a framework to enclose all objects you want to mark. Deselecting objects - In the same manner, except this time pressing the -shift button instead of , allows objects that have been selected for processing to be deselected again. Changing the Reference Object in a Marked Selection If you select only a single object then this object will automatically be the reference object. This means, all coordinates and object dimensions or the position of the object axes printed in the tool window's parameter fields will reference to this object. If you mark a hierarchy branch then the reference object will always be the topmost marked parent object of that branch. If a group of several independent objects or hierarchies are marked simultaneously, then you have the choice to determine the reference object by clicking on the desired object or hierarchy while holding the - and -shift key pressed simultaneously. This key-combination can be applied in the viewport as well as in the Select Objects dialog. 
In the Select Objects dialog reference objects will be emphasized by an additional arrow following the name of the object. In the viewport, reference objects are highlighted through a different representation color (default = yellow). Selecting Individual Facets or Points for Editing You can also mark the individual facets and points of an object or a group of objects for editing in the work modes (Move Objects, Scale Objects, Rotate Objects, Work on Objects and Edit Skin and Bones). First select the objects that you want to work on as previously described. This is done in object selection mode. Then change via the selection buttons in facet- or point work mode. Now you can select individual facets or points in exactly the same manner as before with objects. So just click an individual facet/point in the viewport to mark it. By simultaneously holding down the button, you can select additional facets/points. Holding down the button instead results in unmarking points. You can also drag a framework with the mouse while pressing the or button to enclose several facets/points simultaneously for (de)selecting. Overlapping facets and points - let the selection float around Another function makes it easier to (de)select overlapping facets and points. Holding down the mouse wheel button and turning simultaneously the wheel will run the last facet or point selection up or down the object (alternatively you can use the <+>- and <-> keys on your keyboard). This way you can (de)select a point at the front of an object that overlaps the actual point you wanted to (de)select and then, pressing the mouse wheel button and turning the wheel, the point selection moves to the desired point. Example of positioning individual points in Move Objects work mode: Suppose we have selected an object for editing. The dotted line framework appears about the marked object and, if you move the framework the entire object moves to the position of the framework. Choose the button to change to point selection mode. Press the left button of the mouse while in the viewport. Hold the mouse button pressed and drag a framework that encloses only those points on the object that you want to move independent of the rest of the object. Releasing the mouse button causes a re-draw, after which all the selected points of the object are marked. Furthermore, the movement framework reappears, now exactly enclosing the marked points. If you now move this framework you will see that only the marked points move. Repositioning individual points changes the object's shape (in contrast to moving the entire object - by which the object as such remains unchanged). You could also simultaneously reposition individual points of several objects, if you had previously marked them and have produced a framework around the relevant points. In the same way, you can also scale and rotate individual points. In Edit Object work mode you will find other tools and functions, which you can apply to selected individual points, in order to delete them or reshape objects. .topic 280 Object hierarchies, in which objects are arranged in a tree like structure, help to manage groups of objects belonging together by putting them together under a single parent. The relationship between the objects in a hierarchy tree is also an essential help for building complex animation movements, where parents take along their children with their movements. 
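The principle that parents take their children along can be pictured with a tiny transform-composition sketch (a generic illustration, not CyberMotion's internals): a child's world position is its offset relative to the parent, pushed through the parent's rotation and then through the parent's position - so the child circles with a rotating parent instead of drifting along a straight path.

import math

# Generic sketch of hierarchical movement - illustration only, not CyberMotion code.

def rotate_z(point, degrees):
    a = math.radians(degrees)
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a), z)

def child_world_position(parent_position, parent_rotation_deg, child_local_offset):
    rx, ry, rz = rotate_z(child_local_offset, parent_rotation_deg)
    px, py, pz = parent_position
    return (px + rx, py + ry, pz + rz)

cylinder_pos = (0.0, 0.0, 0.0)      # the rotating joint of a simple robot arm
arm_offset = (4.0, 0.0, 0.0)        # the arm sits 4 units out from the cylinder
hand_offset = (7.0, 0.0, 0.0)       # the hand sits 7 units out

for angle in (0.0, 45.0, 90.0):     # as the cylinder rotates...
    print(angle, child_world_position(cylinder_pos, angle, arm_offset),
          child_world_position(cylinder_pos, angle, hand_offset))
# ...arm and hand swing around the cylinder instead of moving on straight paths.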
Grouping of Objects in Hierarchies Using object hierarchies you can easily group your objects, e.g. under a special Group Object. This provides a clearer arrangement of objects in the object selection window (child nodes can be hidden), and you can select or deselect a whole group of objects just by clicking on the parent object of that group. For instance, if you have created a car consisting of several hundred objects, simply link them under a Group Object "car". Then, every time you want to move that car, you only need to mark the "car" Group Object and move it to its new destination; all children will follow automatically. Hierarchical Animation - Child Objects Inherit Movements from their Parents Hierarchical structures are essential for animating complex movements. Take, for example, an industrial robot that is assembled out of several different rotatable arms and joints. It is almost impossible to animate this robot if you have to move all the robot's parts to their respective desired positions individually, for every movement of the robot. You could certainly mark all necessary parts as a group and move or rotate them to their final position, but the following problem cannot be solved in this way: if, in an animation, you move a non-hierarchical robot arm that is built from several different parts from one position to another, then every single part follows its own animation path. Simple example: The arm pictured above should rotate clockwise around itself through 90 degrees from the start position shown in the left picture. The cylinder is used as the center of rotation. The final position is shown in the right illustration. When you now play the animation you can see that all parts of the robot arm move on straight movement paths from the starting position to the end position, instead of circling around together with the cylinder rotation. In the center picture you see an intermediate position and an undesirable result. Moving on their own paths in the direction of the target position, the objects no longer form a single unit, but instead overlap each other or drift apart. This type of problem is easily overcome, however, with the help of object hierarchies. Individual objects are arranged hierarchically under other objects and their movements then depend on the parent objects. The precise concept looks as follows: Each child object performs all movements of the parent object immediately superior to it, as if it were an integral component of this object. Child objects nevertheless retain their freedom of movement, and can therefore still execute additional movements independent of their parent object. In the simple example pictured above this results in the following: The arm and hand, which are subordinate to the cylinder, correctly rotate with the cylinder. However, you could, for example, additionally have the hand execute a rotation about its vertical axis - without it influencing the movement of the parent arm or cylinder. Use the Select Objects Dialog to Arrange Objects in a Hierarchy In the Select Objects dialog the object hierarchy appears as a tree structure in which child objects are always linked to their parent object. Each child object can have only one parent object. However, an object can have any number of child objects. This works in exactly the same manner as the tree structure of a file manager with its folders and files.
For clarity, you are also able to open or close object nodes via the <+> and <-> buttons to hide or show child objects in the tree structure. A double click on the <+> button opens all child nodes of that parent at the same time. If an object is hierarchically subordinate to another object, then it performs every action of the parent object. If you mark a parent object for editing then the entire object branch with all its children is also automatically marked. Even if a child object is switched off temporarily in the Select Objects dialog, it will still be a part of the hierarchy structure and therefore, internally, follows all movements of its parent object. During preparation of an animation you can make use of this feature to accelerate the workflow and to save some time on redrawing. You could then switch on only those objects in the hierarchy that head a branch and move, rotate or scale them in the individual keyframes. If you then switch on all subordinate objects again in the Select Objects dialog and start an animation preview, you will see that the objects previously switched off actually all take part in the movements of the root object. Object hierarchy in combination with camera and light objects Even positional light objects and the camera can be part of a hierarchy. For example, a camera that is hierarchically subordinate to an aircraft automatically follows all the movements made by the pilot. This also applies to light sources such as headlights, which are hierarchically subordinate to vehicles. Objects that cannot be moved, such as parallel and ambient lights, as well as the background object, cannot be made part of a hierarchy. Build Object Hierarchy => Drag and Drop object name Arranging objects in a hierarchy is as simple as moving files in an ordinary file manager from one folder to another. If you want to subordinate an object or a whole object branch under another, select the relevant object or branch with the left mouse button and hold it down while dragging the object name to the target object. The 'Link' symbol under the mouse pointer indicates that a valid target object has been found and you can release the mouse button. Thereupon the selected object or branch becomes hierarchically subordinate to the target object and the new hierarchy tree is drawn. In the example depicted above you can see how the 'arm_left' object is dragged and dropped onto the 'body' object. On the right side you can see the result with the subordinated 'arm_left' object. In the example above a whole object branch ('hand_right' with subordinated 'finger_right' objects) is linked to the 'arm_right' object, simply by dragging the 'hand_right' object onto the 'arm_right' object. Dismantling object hierarchies The procedure for dismantling an object branch from a hierarchy is exactly the same as previously described for arranging objects under each other, except that, instead of a target object, you drag objects to any empty field in the selection dialog. The 'Unlink' symbol below the mouse pointer indicates a valid place to drop the object branch. Thereupon the object branch is removed from the hierarchy and placed at the end of the object list once more. Sorting object branches in the tree hierarchy You can move root objects up and down to change the order in which the objects are listed in the selection window. No change of the hierarchical tree structure is intended - it's just a sorting operation to clarify the arrangement.
Again all you have to do is simply drag a root object branch and drop it on a position between two other root objects. All valid places are indicated by the 'Insert' symbol below the mouse pointer. Hierarchical Independent Animation What happens if you subordinate previously animated objects under a hierarchy that may itself already have been animated? Read the corresponding chapter on animation - Hierarchical Independent Animation. .topic 290 If you click with the right mouse button while in a viewport, then a popup selection appears that gives, among other things, some additional functions for object selection. Select Objects - invokes the Select Objects dialog. Center Viewport - moves the viewport over a selection of marked objects, facets or points. Center Objects - a selection of objects, facets or points is moved to the center of the viewport. Copy - all marked objects are copied. Delete - all marked objects are deleted. Deselect - the current selection of marked objects, facets or points is canceled. Hide - all marked objects are hidden but not deleted. To show these objects again you must switch them on in the Select Objects dialog. Show All - shows all hidden objects, including camera, background and light objects. NURBS - opens a list of additional functions to modify a marked NURBS object. Boolean Operation - joins 2 marked objects by applying a Boolean Operation. Properties - Apart from some basic object information, the properties dialog contains all necessary parameters for the adjustment of the Degrees of Freedom for the Inverse and Forward Kinematics. .topic 150 All you need to model your own 3D worlds Primitives Create basic shapes at the touch of a button Analytical Primitives Mathematically defined basic shapes Extrude Editor Cutting out objects as with a woodworking saw Spiral Extrude Objects Tube Objects Sweep Editor Create lathe objects Circular-, Ellipsoid- or Torus Templates Sweep Objects with Wavy Surfaces Spiral Object NURBS - B-Spline Patches Organic shapes built from deformable patches and cylinders 3D Text Objects 3D text for logos or film titles Landscapes and Planets Create your own worlds Plane Object The foundation for most projects Functions Editor Visualize mathematical functions Group Object Group objects help to manage object groups or can be used to serve as reference points in animation routines Bones - See "Edit Skin and Bones - Create a Skeleton and Allocate Skin Points" .topic 79 Choose the object menu's "primitives" function or click the -button to see a choice of primitives for facet-based objects. All of these could of course be constructed using the extrude/rotate editors, but it is more convenient to make them by pressing a button. Unlike objects produced by the analytical primitives function, these are based on triangular polygons. This means that they can be more readily modified using functions like scaling, deforming, blending with other objects, deleting facets etc. Once you have selected the object type, a small menu allows you to set up the parameters. Apart from the dimensions, you can also set the object's resolution, i.e. the number of points and facets of the object to be generated. Note that higher resolutions mean slower rendering, up to the point where you can't even handle the object any more because it takes up too much of your computer's processing power to calculate all the redraws. As a rule of thumb, you can get away with a low resolution when the object is small relative to the size of the picture, i.e.
when it is either tiny or very far away from the camera. Also, interpolation works much faster than high object resolution, so you might use this function with an object consisting of a minimal number of facets to reduce the problem. Practice makes perfect here... Block - enter the dimensions to generate a block. Sphere - enter the radius and resolution to generate a sphere. Ellipsoid - enter the radius for x, y and z. Closed cylinder - define a cylinder by its radius and height. Open cylinder - two radii and the height are needed, where the difference between the radii defines the wall thickness of the cylinder. The smaller radius entered is always the inner radius, while the larger value is used for the outer radius. If both are the same, the resulting cylinder will be hollow but without a wall thickness. Hyperboloid - a cylinder with a tapered "waistline". Enter two radii here, the smaller of which defines the "waist" thickness. Cone - defined by base radius and height. Truncated cone - like the cone, but a second radius for the top surface is needed. Torus - defined by its inner and outer radius. .topic 35 Selecting the "Analytical Primitives >" entry under "Objects" in the menu bar or in the button strip calls up the selection of analytical primitive objects. Analytically described objects are different from the objects that are designed with the other editors. All the objects that you have designed so far have been constructed from one basic component - the triangular polygon. With a large enough number of triangles this enables practically any form to be approximated and constructed. The disadvantage of this method is obvious, however: a high management and calculation expenditure in depicting the objects. If, for example, you want to represent a sphere, you need a large number of triangular polygons to approximate the appearance of the sphere. Of course, there are procedures to calculate the shading of the surface to make it appear as if it is really rounded (and these have been built into the program), but nevertheless there remains the flaw of the angular outline and the high calculation expenditure on depiction. In this sub-menu you have the opportunity to create some other basic shapes in addition to the triangle. You can, for example, create a sphere as an object that is constructed as a basic object with a center and radius, instead of polygons. When you represent this object later in raytracing mode, the sphere can be calculated in a very short time, since just the basic object has to be checked for intersection by the viewing ray. A sphere that is approximated from 1000 polygons requires up to 1000 times as long to calculate the picture. (This is not entirely correct, as, through optimization procedures, not all polygons are checked for intersection with the viewing ray.) This is only valid, however, for the time-consuming high quality raytracing mode that creates photo-realistic pictures - with real reflections, shadows and transparency. All the other depiction modes implemented in this program (which are faster by far than raytracing) are based on depicting polygons. To draw the analytically formulated objects in these other depiction modes, an additional polygon-based version of the object is generated. In this manner you can then work with these objects as with the other objects that are based on triangular polygons. However, the number of facets out of which an analytic object is approximated remains low.
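To illustrate why an analytical sphere is so cheap to raytrace, here is the standard ray-sphere intersection as a generic sketch (textbook geometry, not CyberMotion source code): a single quadratic equation replaces the intersection tests against hundreds of triangles.

import math

# Generic ray-sphere intersection test - the classic quadratic solution.
def intersect_sphere(origin, direction, center, radius):
    # Ray: origin + t * direction.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t                            # distance to the nearest hit (negative if behind the origin)

print(intersect_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))   # -> 4.0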
Furthermore, bear in mind that the polygonal depiction of an analytic object only acts as an illustration of the basic object. These objects can be rotated and moved in the same manner as all other objects. When scaling, however, they are subject to certain restrictions. This is made clear by a sphere, for example, which, if scaled along the X axis, is no longer a sphere but becomes an ellipsoid that can no longer be described simply by its center and a radius. Therefore, analytical objects will only be scaled if this operation does not conflict with the mathematical description of the object's shape. All objects can be enlarged or reduced symmetrically, of course. Furthermore, cylinder objects can be scaled symmetrically along their base area as well as along their longitudinal y-axis. By choosing the corresponding menu entry, a dialog appears and you can construct the following basic objects: Sphere: For the definition of the sphere, only the desired radius is input. Circular disk: Specifying the radius is also sufficient for the circular disk. Closed cylinder: You can edit the height of the cylinder as well as the radius of the base. Open cylinder: In addition to the radius and cylinder height, a second radius is defined. The difference between the two radii results in the wall thickness of the cylindrical tube. The smaller value therefore always becomes the interior radius and the larger value the outside radius of the cylindrical tube. An open tube without wall thickness is generated if the interior radius is the same as the outside radius. In the illustration you see the facet-based depiction, in Hidden Line work mode, of the 4 basic object types available and, in comparison, how the same objects are represented by the raytracing procedure. To create an object, you operate the button. It should be mentioned in passing that analytical objects are marked by a different icon before their object name in the Select Objects dialog. .topic 22 - Menu "Objects - Extrude" The toolbox is replaced by the selection shown above. Simultaneously, the viewport windows are replaced by a window that now serves as an area on which to construct a template of your object. The menu bar and button strips are also changed, restricted to only the functions needed here. The new functions in the menu bar and button bars: File -------------------------------- Save Template Load Template -------------------------------- Quit -------------------------------- Options -------------------------------- Spiral -------------------------------- Tubes -------------------------------- Back to Main Menu -------------------------------- Drawing a Pattern for an Object The extrude editor enables objects to be generated through the following mechanism: you can set up a drawing with the mouse in the viewport and then give it a depth. Cutouts can be made, as with a woodworking saw. If you move the mouse over the viewport, the mouse pointer is transformed into crosshairs with which you can place individual points. These are then automatically connected with the help of a rubber-band technique. It is important to take note of the following rules: At least 3 points must be drawn, which must not lie in a straight line. The lines must not cross (except on objects without end covers and on ribbon objects). The picture shows an extruded "A". The outer polygon is always drawn first. By inserting additional polygons you can easily cut holes into the shape.
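The "lines must not cross" rule mentioned above can be checked with standard 2D geometry; the sketch below is a generic segment-crossing test of the kind any polygon editor might use to validate a template - it is not taken from CyberMotion.

# Generic 2D segment intersection test (proper crossings only).
def ccw(a, b, c):
    # Positive if the points a, b, c make a counter-clockwise turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    # True if segment p1-p2 properly crosses segment q1-q2.
    d1 = ccw(q1, q2, p1)
    d2 = ccw(q1, q2, p2)
    d3 = ccw(p1, p2, q1)
    d4 = ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))   # True  - an invalid, self-crossing template
print(segments_cross((0, 0), (2, 0), (0, 1), (2, 1)))   # False - parallel edges are fine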
Look at the template on the left side of the picture. The current polygon you are working on is highlighted (the yellow one), but you can always change between the individual outlines just by selecting a corresponding point on the desired polygon. The red line indicates the two points between which the next point to be set will be connected. The red line will then be deleted and the two points will connect to the newly inserted point. You can move the red line around the drawing with the arrow buttons located beneath the point coordinates box, or just select any point you wish on the polygon to get to the position where you want to insert new points. You can change a point's position by selecting it with the mouse and moving it around while holding the left mouse button pressed. It is also possible to input the x- and y-coordinates of the selected point directly with the keyboard. The Extrude Toolbox The position of the current point is stated at the top of the toolbox. Beneath the coordinates you find the arrow buttons to move the point selection around the drawing. Straight Lines or Curved Line Segments You can choose between straight line segments ( -button) or curved line segments interpolated by B-Spline interpolation ( -button). If you work in B-Spline mode and at least 3 points are set, then these points define a curve approximated by a set of additional points. The picture above shows a circular template consisting of only 8 points drawn in B-Spline mode. 5 additional points were inserted between each line segment to approximate the curved line segments. The number of additional points used to approximate the curve segment can be input in the edit field adjacent to the -button. You can change the number of points at any time, just by selecting the corresponding line segment and entering a new curve resolution for that segment. You can even decide to change a curved line segment into a straight line and vice versa, just by selecting the corresponding line segment and then activating the other line mode. The picture demonstrates the flexibility of the template management. The same outline consists of 8 points, only this time with different point resolutions for the curved segments on the right side, while the line segments on the left side have been switched over from B-Spline mode to straight lines. Grid, Snap Function and Marker A grid screen can be switched on with the button to simplify orientation and serve as a pattern for the Snap function. The spacing between grid lines can be input in the edit field next to the button. The snap function is switched on with the button and ensures that points are positioned only at intersections of the grid lines. If the button is switched on, then each set point is marked. This enables superfluous points to be easily identified. They can then be deleted and the object simplified. With fewer points it follows that calculation time will be shorter when representing the objects. Insert Bitmap Activate the option to place a bitmap in the extrude editor's work space if you want to construct an object by tracing its outline. Click the button of the file select box to load the bitmap you intend to trace. The screenshot shows how you could create an object of the Euro currency symbol by tracing a bitmap. This bitmap tracing function is very useful for creating objects from company logos. Erase Points You can use the function to remove the currently selected point. Delete Template The whole of the current drawing is deleted with the function .
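As a rough picture of how a handful of control points in B-Spline mode is expanded into a smooth outline with a chosen number of additional points per segment, here is a generic closed uniform cubic B-spline sketch - my own illustration of the idea, not CyberMotion's interpolation code.

# Generic closed uniform cubic B-spline: expands a few control points into a
# smooth outline with 'extra' additional points per segment (illustration only).
def bspline_outline(points, extra=5):
    n = len(points)
    outline = []
    for i in range(n):
        p0, p1, p2, p3 = (points[(i + k) % n] for k in range(4))
        for step in range(extra + 1):
            t = step / (extra + 1)
            # Uniform cubic B-spline basis functions.
            b0 = (1 - t) ** 3 / 6
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
            b3 = t ** 3 / 6
            x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
            y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
            outline.append((x, y))
    return outline

octagon = [(1, 0), (0.7, 0.7), (0, 1), (-0.7, 0.7), (-1, 0), (-0.7, -0.7), (0, -1), (0.7, -0.7)]
print(len(bspline_outline(octagon, extra=5)))   # 8 control points -> 48 curve points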
Cut Out Holes from the Template If you have already drawn the outline of your object's template, you can insert additional hole templates within the outline. To make clear that you want to begin with an inner hole template, simply press the button once. You can then begin to draw the hole detail within the template. A started hole template can be completely deleted using the -function. Cover Faces of Extruded Objects Extruded objects are usually created complete with cover faces at the front and rear. You can, however, also generate objects without cover faces by simply switching these options off with the "Cover-Front" and "Cover-Back" buttons. As already mentioned, template lines are not permitted to cross if an object is generated with cover faces. However, if no cover faces are to be generated, the lines of the drawing can cross as required - as demonstrated in the foregoing illustration. Bevel With covered objects you can choose to generate the cover faces with beveled edges. The illustration shows an object generated from an "S"-shaped template with cover faces; next to it you can see the same object generated with beveling switched on for the front cover faces. The distance for the beveling can be defined in the parameter box at the bottom of the tool window. You should choose the value small enough that the template lines do not intersect due to the inward-directed beveling. Ribbon Objects If you activate the button, the final line closing the rubber band is deleted. You no longer obtain an object with cover faces; instead the open line path is extruded as a ribbon. As long as no hole templates have been generated you can still switch between the two options - ribbon or normal extruded object. Extrude - Segments The number of segments or layers making up the object's depth is entered with "Segments". With a segment number of zero you create a flat object without depth. Segment Depth With "Depth" you enter the depth of each individual segment. The overall depth of the object is therefore segment number * depth. If beveling is switched on, you must allow for one additional segment per side with the defined bevel distance. Extrude - Taper The "Taper" value reduces the object's cross section with increasing depth. If, for example, a square template measuring 100 is drawn and extruded without tapering, the result is a simple square prism. The same template, extruded with a taper value of 0.01, results in a tapered, pyramid-shaped object. Extrude - Twist Template With the "Twist" parameter you can enter an angle between 0 and 90 degrees by which the template is rotated from segment to segment. For example, with 10 segments and a twist value of 9 degrees, the object is twisted around itself by 90 degrees. In the illustration you see an example of an object that was extruded with a twist value of 5 degrees and a taper of 0.01 at the same time; a simple square template with only 4 points served as the starting template. (A short illustrative sketch of how these parameters interact follows below.) Create Extruded Object To create an object out of the template you have drawn, simply operate the "Extrude Object" button. A dialog box then appears in which you can give a name to the object. .topic 27 Spiral forms can be generated by drawing a template in the extrude editor and then choosing the "Extrude Spiral Object" entry in the menu bar. The dialog box that appears enables you to generate a spiral object whose cross section has the form of the template you have drawn. 
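Before turning to the spiral parameters, here is the sketch referred to above: a rough illustration (not the program's internal code) of how the extrude parameters Segments, Depth, Taper and Twist combine to build an object layer by layer from the 2D template. The spiral function then sweeps the same kind of template cross section along a helical path. The linear interpolation towards the taper value is an assumption made only for this sketch.

    import math

    def extrude_layers(template, segments, depth, taper=1.0, twist_deg=0.0):
        """Place scaled and rotated copies of a 2D template along the Z axis.

        Overall object depth = segments * depth; the template is scaled towards
        'taper' and rotated by 'twist_deg' from one layer to the next.
        """
        layers = []
        for s in range(segments + 1):
            t = s / segments if segments else 0.0
            scale = 1.0 + (taper - 1.0) * t          # e.g. taper 0.01 -> pyramid-like tip
            angle = math.radians(twist_deg) * s      # e.g. 10 segments * 9 deg = 90 deg total
            cos_a, sin_a = math.cos(angle), math.sin(angle)
            layer = [((x * cos_a - y * sin_a) * scale,
                      (x * sin_a + y * cos_a) * scale,
                      s * depth)
                     for (x, y) in template]
            layers.append(layer)
        return layers

    square = [(-50, -50), (50, -50), (50, 50), (-50, 50)]
    # A square extruded over 10 segments, twisted 9 degrees per segment:
    twisted = extrude_layers(square, segments=10, depth=20, taper=0.01, twist_deg=9)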
The "Radius-" parameters input the distance of the midpoint of the template from the center of the spiral. Different X and Y values will produce an elliptically formed spiral. The "Segments per Turn" determines how many segments make up a single turn of the spiral. The next parameter defines the Number of Turns. The "Pitch" sets the distance between the center points of each turn. If the pitch is less than the object-height this causes overlapping of individual turns. Experimenting with the various parameters can lead to useful effects. For example, a screw-like form can be easily generated by drawing an isosceles triangle with the apex to the right side and choosing the pitch so that individual turns touch or overlap. The "Create" button causes the object to be generated. With "Cancel" you leave the dialog box. Scaling and Distorting Spirals The normal extruded object and the extruded spirals can be scaled and twisted. The relevant parameters in the extrude toolbox must, however, be planned before the call to the spiral dialog. The illustration shows a twisted spiral. The initial template was a simple square pattern of only 4 points .topic 28 The extrude editor also enables tubular objects to be created. Simply draw any line-path (lines are also allowed to cross) and then select the "Tube Object" entry in the menu bar. In the "Tube Object" editor you can then input the radius and the number of segments of the tube walls. If you have drawn a closed path for the line then a closed tubular object is created along the line path when you operate the Object- button. If you have selected the button while drawing the line path, the final connecting line is deleted. If you then call on the tube editor, a tubular object is created in which the ends are not connected together. In this event, you can decide if you require cover faces for the tube ends by selecting or in the tube editor. In the illustration you can see examples of tubular objects with interconnected and open end-segments. In addition you can see a tubular object with a taper, which is created by defining a taper factor in extrude toolbox. .topic 29 - Menu "Objects - Sweep The contents of the toolbox and the menu bar in the sweep editor differ in only slightly from those in the extrude editor. The functions in the new menu bar and button bar: File -------------------------------- Save Template Load Template -------------------------------- Quit -------------------------------- Options -------------------------------- Circular-, Ellipsoid- or Torus Template -------------------------------- Spiral object -------------------------------- Waves -------------------------------- Back to Main menu -------------------------------- Drawing theTemplate for Sweep Objects This time the figure is constructed around a vertical line - which represents the sweep axis - and is drawn only on the right side of the indication surface. Each point that is set appears mirrored in the left half of the surface - giving a clearer idea of the emerging object. Sweeping the points about the vertical axis in a number of steps creates the sweep object. If, for example, you draw a semicircle on the sweep axis this, through reflection, produces a picture of a circle on the indication-surface. Sweeping this about the vertical axis would create a sphere. An example of a glass-shaped sweep template and the object generated from it can be seen in the illustration. 
The Sweep Toolbox The content of the toolbox corresponds more or less to that of the extrude editor toolbox. The "Point Coordinates" box is again found at the top of the toolbox, and beneath it the options for linear or curved line segments and the grid, snap, marker and bitmap functions. The button removes the currently selected point and the button deletes the entire template. Cut Out Holes in the Template In the same way as in the extrude editor you can insert additional hole templates within the outline. To make clear that you want to begin with an inner hole template, simply press the button once. You can then begin to draw the details of the hole within the template. A started hole template can be completely deleted using the -function. Note: Holes in the generated object are only visible later if the object is swept through less than 360 degrees (a segment object). Cover Faces of Sweep Objects Standard sweep objects are created without end cover faces (cover faces are not visible anyway with a full rotation of the template through 360 degrees). However, for segment objects generated with an angle of less than 360 degrees you can include the respective cover faces by selecting the and buttons. Ribbon Objects If you activate the button, the last line completing the rubber band is deleted. You no longer obtain a closed sweep object - instead a band is swept about the central axis. As long as no hole templates have been drawn, you can still switch between the band object option and the normal sweep object at any time. Rotate Segments The number of steps in which the template is turned about the sweep axis is entered via the "Segments" parameter. With a complete sweep about the axis and a segment number of 3, a rectangular template generates a triangular prism; if you generate an object from the same template with a segment number of 12, it forms a reasonable approximation of a cylinder. Sweep Objects - Segment Angle If you enter an "Angle" value of less than 360 degrees, the rotation is restricted to the stated angle and a segment object is produced. The illustration shows an example of a circle template with a hole and the resulting object generated with cover faces and a rotation angle of 180 degrees. Create Sweep Object To create an object out of the template you have drawn, simply operate the "Sweep Object" button. A dialog box then appears in which you can give a name to the object. .topic 31 If you select the "Circular-, Ellipsoid- or Torus Template" entry in the sweep editor's menu bar, a dialog box appears that automatically generates templates for the sweep editor. Generating Ellipsoid and Sphere Representations If you want to create a sphere or an ellipsoid and the button is operated, an object is not immediately generated - only the template. This gives you the opportunity to modify the template and so generate almost any sphere, ellipsoid or disk segment, simply by erasing some points or by entering an angle of less than 360 degrees. All you have to edit are the radii along the X and Y axes and the number of points that will be used for the template drawing. Generating a Torus Template If you enter an additional X-Offset for the template, the circular shape is shifted along the x-axis, resulting in a torus object when the template is swept around the vertical rotation axis in the sweep editor. 
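As a small illustration of the torus case (an assumption-laden sketch only, using the same parameter values as the worked example that follows): the dialog merely produces a circular template shifted along the x-axis, which the sweep editor then rotates about the vertical axis.

    import math

    def torus_template(radius_x=25, radius_y=25, points=16, x_offset=100):
        """Generate the circular template, shifted along X, that the sweep
        editor turns into a torus when rotated about the vertical axis."""
        return [(x_offset + radius_x * math.cos(a), radius_y * math.sin(a))
                for a in (i * 2 * math.pi / points for i in range(points))]

    template = torus_template()   # radius x = y = 25, 16 points, X-Offset = 100
    # Sweeping this template (see the sweep sketch above) produces a torus with
    # a tube radius of 25 and a ring radius of 100.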
Example: torus template with radius x = y = 25, 16 points, X-Offset = 100 .topic 32 When you choose the "Waves" entry in the menu bar of the sweep editor, a dialog box appears with which sweep objects with an undulating surface can be generated. Call this function after a template has been drawn in the sweep editor. With the help of the wave function, the object produced by the sweep movement is overlaid with sinusoidal oscillations, giving it an undulating surface. A large number of points and segments are, however, required to achieve this effect. The overlay follows the horizontal plane only. Wave Parameters: "Distortion Radius" is the operating radius of the overlaid sine wave. The number of segments per oscillation is input with "Wave Segments". To generate a symmetrical surface, the number of rotation steps (segments) must be divisible by the number of segments per oscillation. If you want to generate an undulating sphere out of 30 segments and enter a wave segment number of 5, then 30/5 = 6 surface waves are produced. Such an object is shown in the illustration. The wave formation need not cover the entire object: the "Startpoint" determines the point in the template from which the overlay begins, and "Endpoint" marks the point in the template up to which the wave formation is overlaid. In this way you can generate, for example, a glass object with an undulating stem and a smooth cup and base - you need only specify as start point the point at which the stem begins and as end point the final point on the stem before the cup starts in your template drawing. The types of oscillation that can be superimposed, selected with the buttons <1> - <3> in the "Distortion" box, are: 1. Only the upper half of the sine wave is used. 2. Only the lower half is used, so that the surface waves curve inwards. 3. The complete sine wave is overlaid. .topic 100 Select the "Spiral Object" entry in the sweep editor's menu bar to open a dialog that generates a spiral with rounded walls in either circular or elliptical form. The X and Y parameters again relate to the radii of the spiral. "Segments per Turn" is this time responsible for the number of subdivisions per turn. With "Tube Segments" you determine how many rotation steps the tube cross section is generated from: a value of 3 results in a triangular cross section, while a value of 8 approximates a curved wall. "Number of Turns" is self-explanatory, and "Pitch" again determines the spacing of the centers of individual turns. "Tube Radius" determines the cross-sectional radius of the spiral's tube. The spiral is calculated immediately on operating the button. Example: radius x = y = 100, 30 segments per turn, 6 tube segments, number of turns = 3, pitch = 50 and tube radius = 8 .topic 91 The "NURBS Object" dialog is reached by choosing the "NURBS..." entry under "Objects" in the menu bar or in the button strip. NURBS stands for "Non-Uniform Rational B-Spline", a special type of deformable 3D patch. A surface is created based on a rectangular grid, similar to that of the 3D function generator. This time, however, the individual points of the grid represent control points that form a surface of much higher resolution. By manipulating these control points you can very easily model smooth and organic shapes. Above you can see a NURBS patch defined by 4 by 4 control points (painted orange), but the actual resolution of the grid is much higher. 
You can adjust the initial resolution for a NURBS object with the resolution slider. The points lying between the control points are interpolated using B-Spline interpolation and are recalculated each time you move a control point. Control points can be selected in point selection mode and then manipulated like any ordinary point of a common facet-based object. Thus you can select, move, scale, rotate or deform control points as required to shape the form of the patch. The picture shows a patch with the 4 center control points selected. In top view these 4 points were moved to the front. You can clearly make out how the individual control points are interpolated through a curved surface. In CyberMotion, control points are an integral part of the surface and therefore directly influence the shape of the grid. It is therefore not necessary, as you may know from other programs, to adjust additional weight parameters for each control point. The Dialog Parameters: Control Points You can define the initial number of control points in the x- and y-direction of the NURBS patch. Later you will see how to add new rows of control points to existing NURBS objects. Resolution The resolution parameter defines the initial point resolution of the surface. This can be changed at any time later in the working process. With NURBS you can model quite complex objects, e.g. faces. The display of high-resolution surfaces can considerably slow down the representation on the screen, so a lower resolution is usually chosen for the modeling process and a higher resolution is set at the final rendering stage for high-quality output. In the illustration on the left you can see a surface with a resolution of 0 (only control points, no additional points are generated for the surface); next to it is the same object with progressively higher levels of resolution. Surface or Cylinder In addition to the initial flat NURBS patch you can choose to generate a cylindrical NURBS object. In that case, the "control points x:" parameter defines the number of control points forming the circular shape. The illustration above shows an example of a cylindrical NURBS object and, next to it, the heart-shaped object that was formed from it with a few scaling and moving operations. Functions to Modify NURBS Objects NURBS objects can be modified subsequently, e.g. by adding or deleting extra rows of control points or by changing the resolution. To modify a NURBS object, first select it as a reference object. Then open the popup selection by clicking with the right mouse button in a viewport window. Under the NURBS entry you will find the following functions: NURBS - Add Row NURBS patches can be extended by adding extra rows of control points. All 4 sides of a patch can be enlarged this way. For cylindrical NURBS you can add rows to the top and to the bottom. Above you see an example of a NURBS cylinder after adding one extra row of control points to the top of the cylinder. NURBS - Delete Row This function deletes superfluous rows of control points. NURBS - Change Resolution To change the resolution of a NURBS patch or cylinder, choose a value between 0 (only control points are visible) and the maximum resolution of 15. NURBS - Convert NURBS to Facet-Based Object Similar to analytically described objects, NURBS objects are restricted in some ways. The points lying between the control points are recalculated after each manipulation and cannot be selected individually to work on them. Many functions, e.g. 
deleting facets and points, Boolean operations or detaching parts of an object, cannot be applied to a NURBS object. Converting a NURBS object into a simple facet-based object makes all of these functions available again. On the other hand, there is then no way back to the NURBS object with its special modeling abilities of manipulating control points and resolution. .topic 58 - Menu "Objects - 3D-Text" In the "Text Object" dialog, any TrueType font can be selected for generating 3D text objects. Beveling can also be applied. Simply select the font you like, choose the output quality, input the text and create the object with the -button. For a simple example of a text object, see the project folder "/projects/font.cmo" Font - Activate the -button to display the system's font dialog, from which you can select any TrueType font installed on your computer. Bevel - Text objects can be generated with beveled edges. You can select this option separately for the front and the rear of the object. If, for example, the rear of the text will never be shown, you can save a considerable number of points and facets by switching on only the front beveling. The depth of the beveling is controlled by the "Bevel" slider beneath the text preview. The bevel value in percent defines the depth of the beveling in relation to the total depth of the created text object. Resolution - You can restrict the number of points needed for an object with the resolution slider. In the preview window you can check the output quality of the text object. Switch on the "Show Points" option to emphasize all points from which the object will be generated. This makes it easier to find a good compromise between good output quality and the smallest possible number of points and facets. Text - Input the desired text for your 3D text object in this edit field. .topic 370 No photo, no bitmap textures, only the pure beauty of mathematics - create your own worlds with the CyberMotion 3D-Designer Landscape Editor Introduction and dialog layout Landscape Editor - Basic Parameters Define the dimension and structure of your terrain or planet Landscape Editor - Filter Add terraces, craters or dunes to the terrain Landscape Editor - Edit Height Map Paint your own ridges or the course of a river into the height map Related topics: Tutorial - Landscape Design Background Dialog - Atmosphere .topic 34 - Menu "Objects - Landscapes" If you select the "Landscapes" menu entry, a dialog for the production of fractal landscapes appears. These landscape objects are based on a rectangular grid, with height information calculated for the grid coordinates by a fractal algorithm (similar to the Functions editor). Above are two illustrations of such grid objects, rendered with special landscape textures and an atmosphere. You can download the demo files of these two animated examples from our internet object library. The first example shows a flight around the plateau, the second a flight above a group of islands. The landscape editor's visual library also contains the pattern for the landscape object shown on the far left. Should you wish to generate the same object, use the "littleland" file from the library, then change to the material editor and select the corresponding "littleland" material from the materials library. The landscape editor provides a preview window with a shaded plan view and many functions and filters (crater, terrace, etc.) for the basic generation and editing of the height fields. 
By means of special painting tools you can directly "draw" in the preview window, for instance to raise or lower the ground or to smooth eroded slopes. In planet mode the landscape net is wrapped around a sphere to create highly detailed meteors or planets. The landscape editor can generate nets of up to 2 million facets, but this has to be handled carefully: a minimum of 256 MB RAM should be available when exceeding a million facets, otherwise the constant swapping of memory to the hard disk will virtually bring the whole process to a halt. However, there is no need at all to generate such large nets, because the new landscape objects come with new multi-layered procedural textures that provide the necessary details. See also our tutorial Landscape Design on how to plan a whole outdoor scene with landscape, atmosphere and water plane. The Landscape Editor The dialog is divided into 3 areas: On the left side you will find the tool area with all settings for the generation and editing of the landscape. It contains three sets of parameters that you can switch between with the 3 tabs in the dialog's header: - Basic - The basic parameters define the fractal pattern and the dimensions of the object. You can add a clipping plane and decide whether you want to create a landscape or a planet object. - Filter - These filters are blended with the basic fractal pattern to add dunes, terraces, craters or crevices. - Edit - The Edit tab provides the painting tools to directly influence the height map in the preview window. The preview window is located in the middle of the dialog. It shows the height map of the landscape. The height of each individual point in the height map is indicated by a different color corresponding to a color range placed directly beneath the preview window. If the button is also switched on, an additional light source illuminates the height map, clearly showing the contours of the mountains. To edit a color range, simply click on the color range button. The Color Range dialog opens, where you can freely define your own colors or simply load pre-defined entries from a visual library. When you move the mouse over the height map, the current height is indicated below the preview window. Incidentally, when creating a landscape the current color range is automatically selected as a color range texture for the object. See also the chapters about landscape design and procedural textures to learn more about the special capabilities of landscape textures. The visual landscape library occupies the area on the right side of the dialog. You can easily load existing files or add new examples to the library. The files do not contain data on actual facets (that would be rather memory consuming), only the parameters and working steps leading to the current height map. See also: Visual Libraries. .topic 103 When you select the tab in the landscape editor, the basic parameters for the definition of the fractal pattern, the dimensions and the clipping of the landscape object appear. You can also determine here whether you want to create a landscape or a planet object. At the top of the dialog are two additional Undo/Redo buttons, with which you can undo or repeat almost every operation in the course of the landscape creation. The parameters: Resolution - specifies the number of points used to form a row of the fractal grid. 
For instance, if Width and Depth of the grid object are identical and a resolution of 250 points is input, the final grid will be made up of 250 * 250 = 62,500 points. If the Width and Depth parameters differ, a suitable point division for the shorter side of the grid is calculated from the resolution parameter. Caution: If your computer is not equipped with at least 128 MB of free RAM you should not exceed the limit of half a million facets for a landscape object, otherwise the constant swapping of memory to the hard disk will considerably slow down a fluent working process. With 256 MB and more, even 1 million facets and more can be handled quite comfortably. Generally it is advisable to first create an object with a low point resolution and adjust camera positions, background, material settings and lighting. Then go back to the landscape editor and create a new landscape object with a higher point resolution to replace the low-resolution object. Before deleting the old object, transfer the material settings from it in the material dialog using the material function. Range - the height map used to form the grid is calculated from a fractal algorithm. The Range parameter acts in this context like zooming out of this fractal structure: the higher the value, the more extensive the area covered, with an increasing number of hills and valleys becoming visible. Detail - the Detail parameter sets the number of iterations used by the fractal algorithm and with it the detail of the generated surface. Each iteration adds new detail to the height map - the landscape becomes more jagged and bumpier with every step. With only the minimum value of 1 iteration the algorithm produces a very gently hilly terrain. Smooth Slope - While the topography in low areas often appears smooth and rounded, with increasing height the terrain appears more jagged, stony and therefore more detailed. With the Smooth Slope parameter you enter a percentage of the overall height of the object below which the lower parts of the object are smoothed down. Fracture - A variation of the fractal algorithm results in a somewhat more fractured surface with sharp steps and rifts. Flat Edges - With this option switched on, the edges of a landscape object run smoothly down to ground level. This way the terrain can be merged seamlessly with a plane object (for instance as an island on a water plane, or as several smaller terrain objects on a grassy plain). The parameter determines the area over which the blending of the height is calculated. Random - The Random parameter enters different starting values for the fractal calculation and so produces many variations of the object. You can also switch off the random generator - this results in an absolutely flat plane object consisting of thousands of points and facets. This makes no real sense in itself, but you can use it as a basis to study the outcome of parameter changes on the Filter page of the dialog, for instance to make the overlaying of craters, dunes and crevices visible without being distracted by the fractal pattern. The same goes for the Edit page with the painting tools. Landscape Dimensions The Width, Depth and Peak parameters specify the dimensions of the generated object. Any rectangular area within the world limits of ±16,000 units can be defined. You can also enter a negative value for the Ground Level. 
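Before moving on to clipping and the planet option, a brief illustration of how the parameters just described interact. CyberMotion's actual fractal algorithm is not documented here, so this is only a rough, hedged sketch (illustrative names, simple layered random bumps) of the general idea: the Random seed starts the calculation, each Detail iteration adds finer and weaker structure, and the result is stretched to the chosen Peak height; the point count grows with the square of the resolution (250 * 250 = 62,500 points).

    import random

    def height_map(resolution=250, detail=6, peak=1000.0, seed=0):
        """Illustrative fractal-style height map (not CyberMotion's algorithm):
        each Detail iteration adds a finer, weaker layer of random bumps."""
        rnd = random.Random(seed)                      # the 'Random' parameter
        grid = [[0.0] * resolution for _ in range(resolution)]
        cell, amplitude = resolution, 1.0
        for _ in range(detail):                        # 'Detail' iterations
            cell = max(cell // 2, 1)
            amplitude *= 0.5                           # finer detail, smaller height
            bumps = {}
            for z in range(resolution):
                for x in range(resolution):
                    key = (x // cell, z // cell)       # one random offset per coarse cell
                    if key not in bumps:
                        bumps[key] = rnd.uniform(-amplitude, amplitude)
                    grid[z][x] += bumps[key]
        # stretch the result so the highest elevation reaches roughly 'Peak'
        top = max(abs(h) for row in grid for h in row) or 1.0
        return [[h / top * peak for h in row] for row in grid]

    terrain = height_map(resolution=100, detail=6, peak=1500.0)
    print(len(terrain) * len(terrain[0]), "points")    # 100 * 100 = 10,000 points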
With an additional plane object and the option activated (always aligned to a level of zero), parts of the object will then lie below the plane. If this is intended, you should also switch on the Clipping-Height function to remove all superfluous points and facets below the plane level. You can also enter higher values for the Clipping-Height to generate a group of islands from the height field. Plateau-Height - If this option is activated, all height values above this level are cut off at the plateau height. While this option is suitable for creating simple plateaus, the terrace functions on the Filter page offer far more possibilities for creating terraced surfaces with plateaus on top of the terrain object. Landscape versus Planet At the bottom of the dialog you can decide whether you want to generate a landscape object or a planet. In the latter case the fractal grid is simply wrapped around a sphere and the edges merged seamlessly together. In principle the word planet is a little overstated: if you look at pictures of the earth seen from outer space, our planet appears almost perfectly round - you recognize heights and valleys more by their color than by an irregular silhouette against the background. Taking this into account, a simple sphere with a good bitmap texture is usually more appropriate for creating a planet than a high-resolution fractal planet object consisting of millions of points and facets. On the other hand, with the planet option you can create wonderfully irregular shapes for moonlets or asteroids - especially in combination with the crater filter. Landscape - Add Plane If your project does not yet contain a plane object and the option "Landscape" is switched on, a plane object at ground level is automatically generated with the terrain object. Set Camera & Atmosphere If this option is active when creating the landscape, a suitable camera position in front of the generated landscape is assigned automatically and the background is switched on. Landscape - Pedestal The edges of the landscape object are made vertical; the terrain, one might say, stands on a pedestal. Planet with Water Surface If the Clipping-Height function is switched on, an analytical blue sphere is automatically generated with the planet object. This sphere represents the water surface covering the parts of the planetary surface where facets were removed by the clipping function. In the object selection dialog the water sphere is placed hierarchically subordinate to the parent planet object. Divide Landscape/Planet into Separate Objects On generation the object can be split into several separate objects. This considerably speeds up the rendering process when the high-quality raytracing algorithm with shadows and reflections is used for the picture calculation. The more complex the object, the greater the number of separate objects it should be split into in the selection box at the bottom of the dialog. For instance, rendering can be 10 times faster after splitting a landscape object consisting of 1 million facets into 25 separate objects. This applies only when shadows, reflected or refracted rays are involved in the rendering (for instance, if the landscape casts shadows or is reflected in a water plane). The separate parts of the object are hierarchically subordinated to a parent object. The highest part of the landscape will always be the parent object. 
This makes it easier to select the whole landscape for working on, because you just have to select the highest elevation in the terrain to mark the parent and with it all its children. It is also important for the later texturing of the terrain: the terrain textures depend (among other things) on the total height of an object, so it is the parent object that provides the total height dimension for the texturing. All subordinated objects reference the material of the parent object. .topic 104 Select the tab in the landscape editor to bring the page with the filter settings to the fore. The filter functions overlay the basic fractal landscape pattern to add further details such as dunes, terraces, craters or crevices to the terrain. Example of a fractal grid object overlaid with a dunes filter: it consists of just 250*250 points (about 125,000 facets) and was rendered with a simple sand texture and an atmosphere. Filter Effects Peak - A conical mountain peak is blended into the height map. The Peak parameter controls the strength of the blending of the cone with the basic fractal pattern. Dunes - With the Dunes option you can overlay the fractal pattern with a sinusoidal wave, thus simulating slopes similar to dunes. The number of dune Rows is entered via the corresponding parameter, and with Turbulence you add a turbulent distortion to the sinusoidal wave. Crater - Adds craters to the landscape. You can input the number of craters and their average size. Crevice - Adds crevices to the terrain. Again you can enter the number of crevices and their size. To obtain really smooth crevices a very high point resolution has to be set, so use this function with caution. Terrace - Subdivides a landscape into a given number of terraces. The Slope Ratio controls the transition from one terrace to the next: a low value results in steep terraces with wide level areas, while a higher value calculates smooth transitions between terraces with only small areas of level ground. .topic 105 Select the tab in the landscape editor to bring the page with the painting tools to the fore. With these painting tools you can directly "draw" in the preview window to modify the height map. Thus you can raise or lower the ground or smooth eroded slopes, for instance. Painting into the Height Map Each time you change one of the various parameters on the Basic or Filter page of the dialog, a completely new fractal pattern is calculated for the height map. This changes when you switch over to the Edit work mode: the moment you draw into the height map, the landscape is no longer interpreted as a fractal function defined by various parameters but as a simple height map that you can paint on. If you adjust any of the Basic or Filter parameters after painting into the height map, the fractal pattern will be recalculated from scratch and all modifications painted into the height map will be lost. The "Edited!" label that appears at the top of the dialog when painting in a height map will remind you of this fact. However, it is merely for information (also when loading edited library files), since all modifications can be undone anyway by operating the UNDO button. Brushes and parameters: Brush Radius - For a round brush this is the radius of the area of influence under the brush (as a fraction of the width of the landscape). For a rectangular brush this value defines the width of the brush (again as a fraction of the width of the landscape). 
Strength - This parameter controls the strength of the effect, for example how strongly and how fast the surface is raised or lowered under the brush. Brush Filter - This filter specifies the distribution of the effect over the brush area. The illustration shows the four different brush filters, in each case applied with the Raise function on a flat plane (Random switched off on the Basic page of the landscape dialog). The top row was "painted" with a round brush and the row below with a rectangular brush. In general the round brush with the bell-shaped filter (top left in the illustration) gives the best results, as it provides the smoothest transitions towards the edges. Brush Shape - round or rectangular. Minimum and Maximum Height - If you do not want to exceed a particular minimum or maximum height when applying the painting functions, switch on the corresponding option. Initially, the Peak and Ground Level parameters from the Basic page of the dialog are entered automatically when switching over to the Edit page, but of course you can input any appropriate height within the world limits of ±16,000 units. Brush Effects: Raise - The area beneath the brush is raised. Lower - The area beneath the brush sinks down. Fixed Height - You can specify a fixed height up to which the area under the brush will be raised or lowered. Average - This function levels out the area under the brush. The surface is smoothed without removing too much detail. In this example the Average function was used to carve steps out of the slope by setting "points" with a rectangular brush. For further examples of use see the tutorial on landscape design. Smooth (Erosion) - Smoothes away the rough edges of a surface. .topic 56 - Menu "Objects - Plane" In the Background dialog you can easily select a sky model for the background. What may still be missing is a corresponding horizontal object. While there are a number of ways to model a suitable surface, you can instead use the Plane dialog to create a simple plane object. By operating the "Plane" button you actually generate only a single square facet. Now you may say that you could do exactly the same with the extrude editor. The plane object, however, occupies a special position in picture calculation and object manipulation: the plane stretches to infinity at the horizon of the picture, while the sizes of other objects are restricted to the 3D area dimensions and so cannot meet the sky at the horizon. The plane cannot be scaled (it always stretches to infinity), nor can it be rotated. Only the height of the plane can be changed - so that you can adapt the "ground level" to suit the position of the other objects. The height of the plane can be adjusted in the Move Object menu, like a normal object. The plane can also be switched on or off in the Select Object dialog, and you can select or edit the material of the plane as with all other objects. You can even set up a multi-layered plane model, in which you generate several planes lying above one another. If, for example, you position a plane above the camera instead of the sky, you can produce entirely new backgrounds by selecting suitable materials. However, you should remember that with parallel light sources and the shadow function switched on, the upper plane would shield the "sunlight". In the example in the illustration above, two planes with bitmap textures and a fog background are used. 
The light comes from a lamp located beneath the upper plane. With the "Plane"-Y parameter you can enter the height of the plane when it is created. With the "Lowest" button the height of the plane is set to the lowest height of all the objects. .topic 33 - Menu "Objects - 3D-Function" A dialog box appears that contains a function "pocket calculator" and a parameter box. Using the function generator you can generate three-dimensional functions over the X and Z coordinates of the horizontal plane. The procedure is quite simple: a rectangular grid is generated in the X, Z plane, whose grid coordinates form the parameters of the function. The starting point of the grid, the number of points in the X and Z directions, and the spacing of the individual points can each be edited in the parameter fields on the right side of the dialog box. With the help of the mouse you can input any function into the "pocket calculator" - which will not allow invalid input (it will not close brackets, for example, if no brackets have previously been opened; missing brackets are set automatically, etc.). Press the button, and for each grid point in the X, Z plane the corresponding Y value is calculated from the function. These points are then assigned surface facets and the object is generated. The Functions of the "Pocket Calculator": You must use the mouse to input a function. Click on the required function as you would on a simple pocket calculator. Trigonometric Operators: - calculates the sine of the argument. - calculates the cosine. - calculates the tangent. The arguments can be interpreted as radians or degrees - determined by the and buttons. - calculates the square root. - natural logarithm to the base e. - common logarithm to the base 10. - corresponds to the sign for negative '-'. - supplies the absolute value of the argument. - sign function: gives -1 for arguments less than 0, 0 if the argument equals 0 and 1 if the argument is greater than 0. - rounds floating-point numbers down to the next lower integer. - rounds floating-point numbers up to the next higher integer. An opening bracket appears automatically behind each operator inserted. Functions are not calculated for invalid arguments; instead, the function value at the X, Z coordinate concerned is set to zero, so a function object is generated in every case. Invalid arguments are: - Negative root expressions. - Logarithms of expressions that are less than or equal to zero. - Tangent values near 90° or 270°. Arithmetic Operators <+> - Addition. <-> - Subtraction. - Division. <*> - Multiplication. <^> - Power. Variables: - X grid coordinate. - Z grid coordinate. A function can be deleted with the help of the button. The button generates the object from the function. The button exits the dialog box. The option resizes the created grid to the dimensions of the viewport window. The Grid Parameters: In the "Points" box you can input the number of points in the X and Z directions. At least 2 points must be entered for each direction. Facets are then formed over each square of the grid so that the entire rectangular shape is covered. The maximum number of points that can be entered in each direction depends on the number of points available at the start and is limited automatically. The "Offset" box allows you to enter the spacing between individual points. 
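The worked examples further below use exactly this mechanism. As a minimal illustration (a Python sketch with made-up names, not the program's own code), the generator simply evaluates the entered formula at every X, Z grid coordinate, with invalid arguments falling back to zero as described above:

    import math

    def function_grid(f, points_x=30, points_z=30, start=-900.0, offset=60.0):
        """Evaluate y = f(x, z) at every coordinate of a rectangular X/Z grid,
        as the 3D-Function generator does; invalid arguments fall back to 0."""
        grid = []
        for i in range(points_x):
            row = []
            for k in range(points_z):
                x = start + i * offset
                z = start + k * offset
                try:
                    y = f(x, z)
                except ValueError:          # e.g. square root of a negative number
                    y = 0.0                 # invalid arguments are set to zero
                row.append((x, y, z))
            grid.append(row)
        return grid

    # The first worked example below: y = 10 * sin(sqr(x^2 + z^2)) in degrees mode
    wave = function_grid(lambda x, z: 10 * math.sin(math.radians(math.sqrt(x*x + z*z))))

    # The staircase example: y = floor(x), start 0, offset 0.25
    stairs = function_grid(lambda x, z: math.floor(x),
                           points_x=40, points_z=4, start=0.0, offset=0.25)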
For many function surfaces it is advisable to use small offsets of about 1 or less to obtain a moderately fine structure. In that case only a relatively small grid appears on the screen when you later view the object, and to work on an object created by the function generator you normally have to enlarge it in the Scale Object menu. With the "Start" box you can enter the starting values for the X and Z parameters. Scaling is enforced automatically should the function values extend beyond the limits of the represented area. Similarly, the function object appears centered at the mid-point of the viewport, independent of the starting point and offset. Examples As an example, we want to generate the function 10*sin(sqr(x^2+ z^2 )). Points in X direction: 30 Points in Z direction: 30 We enter an Offset of 60 and select the button. To obtain a symmetrical form from the function we enter a starting value of -(30 points * 60 degs.)/2, i.e. -900. From the start point of (-900, -900), the function generator now produces a grid of 30 * 30 = 900 points in the X, Z plane and records for each point the Y value given by the function at the X, Z coordinates. The sine function, which can assume a maximum value of 1, is stretched in the Y direction by the factor 10 in front of the sine bracket. Switch on the option, as otherwise the created grid would be much too large to fit into the viewport windows and you would have to rescale it by hand. After creating the object, rotate it using the Rotate Object menu to see the surface structure clearly. Rotate the object by about -45 degrees, first about the Y axis and then about the X axis. This produces a view of the object as seen in the illustration. Next example: Floor(x) Incredibly, with this very simple function you can easily create staircases. The parameters: Points x: 40 Points z: 4 (actually you only need 2) Start: 0 Offset: 0.25 With these parameters you create a small ribbon of stairs that will have to be scaled in the z-direction to widen it. .topic 640 This is how a Group Object looks in the viewport window. A Group Object is a simple primitive object that you can handle like all other objects, e.g. you can move, rotate or scale it. The only difference is that Group Objects are always hidden in the final rendering - you can only see them in the viewport windows while working on your project. To create a Group Object just select the "Objects - Group Object" entry in the menu bar. There are two major uses for Group Objects: Use Group Objects to manage object groups Group Objects are simply used to group together a number of objects by linking them under the Group Object in a hierarchy. This provides clarity in the object selection window (child nodes can be hidden) and you can select or de-select a whole group of objects just by clicking on the parent object of that group. For instance, if you have created a car consisting of several hundred objects, simply link them under a Group Object "car". Then, every time you want to move that car, you only need to mark the "car" Group Object and move it to its new destination - all children will follow automatically. Example: This box consists of 30 individual elements, all of which are subordinated to the green Group Object. To move all parts of the box at the same time you just need to select the parent Group Object and use it as a "grip" to carry all the elements of the box with it. 
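The behaviour described here is ordinary scene-graph parenting. The following minimal sketch (illustrative only, with made-up names such as "car"; it is not CyberMotion's internal code) shows the idea behind both major uses: moving a parent carries all of its children with it, and rotating the parent about a chosen pivot acts as a movable reference point, which is exactly what the next paragraph describes for animations.

    import math

    class Node:
        """Minimal scene-graph node: moving or rotating a parent carries
        every child with it - the idea behind Group Objects."""
        def __init__(self, name, position=(0.0, 0.0, 0.0)):
            self.name = name
            self.position = list(position)
            self.children = []

        def move(self, dx, dy, dz):
            self.position = [self.position[0] + dx,
                             self.position[1] + dy,
                             self.position[2] + dz]
            for child in self.children:          # all children follow automatically
                child.move(dx, dy, dz)

        def rotate_y(self, degrees, pivot):
            """Rotate this node (and its children) about a vertical axis
            through 'pivot' - e.g. a Group Object used as reference point."""
            a = math.radians(degrees)
            x, y, z = self.position
            px, _, pz = pivot
            rx, rz = x - px, z - pz
            self.position = [px + rx * math.cos(a) - rz * math.sin(a), y,
                             pz + rx * math.sin(a) + rz * math.cos(a)]
            for child in self.children:
                child.rotate_y(degrees, pivot)

    car = Node("car")                             # the Group Object acting as a "grip"
    car.children = [Node("body", (0, 5, 0)), Node("wheel", (10, 1, 15))]
    car.move(100, 0, 0)                           # every part of the car moves along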
Group Objects provide movable reference points in animations Group Objects can serve as reference points for rotating or scaling object groups in animations. If, e.g., you want to rotate a group of objects around a common midpoint, you would link this group under a Group Object and simply rotate the Group Object together with its child objects. The same applies to camera rotations: if you want to circle the camera around another object or group of objects in an animation, just place a Group Object at the visual focus within that group and link the camera under this Group Object. Use the "Focus" camera function to center the Group Object in the camera view, then rotate the Group Object and the camera will rotate with it in a perfect circle, with the focus always centered on the object group. .topic 130 How to set up your scene by moving, scaling or rotating objects or working on individual facets and points. Move Viewport How to move the visible detail in your viewport windows Move Objects and Textures How to position objects and textures or let objects drop onto other objects Scale Objects and Textures Resizing objects and textures Rotate Objects and Textures How to rotate objects and textures about themselves or other objects Kinematics Use Inverse or Forward Kinematics to move hierarchically structured joint models Edit Objects Working on objects: Facet Extrude - Dragging facets out of objects Add Points and New Facets - How to extend an object with new points and facets Delete Individual Points and Facets - How to delete selected points and facets Boolean Operation - Model objects by combining them, e.g. by using one object to cut a hole into the other object Detach Object - Cut off parts of an object Triangulate Facets - Increase the facet resolution of an object Magnet - Deform an object using a magnetic field Smooth Object - Triangulate facets and smooth the shape of an object Invert Normals - How to invert the direction of the surface normals Selective Facet Interpolation - Include or exclude individual facets from facet interpolation Edit Skin and Bones Character animation - How to create a skeleton for a skin object Animated Object Deformation How to bend, twist or inflate objects .topic 15 - Menu "Edit - Move Viewport" The viewport window can be freely moved along the three axis directions until you reach the preset 3D area limits, which lie at 2^24 = 16,777,216 units. However, the window moves only to the edges of this area, so objects always remain visible in front of the viewport window. You can move the window detail in any work mode simply by pressing the left and right mouse buttons simultaneously while moving the mouse. However, here in the "Move Viewport" work menu it is sufficient to hold only the left mouse button pressed. Here you can also read the coordinates of the center of the window detail or input them directly via the keyboard. The order of the X, Y and Z axes in CyberMotion's left-handed axes system. Restrict Direction of Movement These three buttons control how the window detail may be moved. If the left button is active, the window detail can be freely moved. If the button with the horizontal arrow is active, vertical movements of the mouse are ignored, so the window moves in the horizontal direction only. The converse is true for the button with the vertical arrow. 
Line Up the Window Automatically on an Object or a Group You can save yourself a great deal of time spent searching to move the window over an object that is outside the visible area by using the
button. However, to use this function you must have previously marked the relevant objects, facets or points. If you then choose the
button the window is precisely centered over the marked selection. You can also call up this frequently used function in other work modes: a click with the right mouse button in a viewport window displays a popup selection with a list of functions including the "Center Viewport" function. .topic 18 - Menu "Edit - Move Object/Texture" Modeling- versus Animation Mode Differences between Modeling Mode and Animation Mode Depending on whether you work in Modeling or Animation Mode, the tool window presents a slightly different selection of tools. For instance, in Modeling Mode you can change an object's basic shape by editing individual facets or points of the object. In Animation Mode you can only work in object selection mode, and editing of individual facets and points is no longer available. However, you can animate objects on a per-point basis if you use the skeleton deformation functions (Skin and Bones). The "Texture" pages, where you can adjust the position, size and alignment of procedural and bitmap textures, are hidden in Animation Mode, too. You can move, scale or rotate textures only in Modeling Mode. Based on this initial alignment, textures behave very flexibly in animations: when scaling or deforming objects or skins in an animation, all textures are deformed properly with the object. Moving Objects, Facets, Individual Points or Object Axes with the Mouse You can move a marked selection simply by pressing and holding the left mouse button and dragging the selection across the work surface in the desired direction. Axes of Movement You can move a selection along the world axes or along the object axes belonging to each object: If movement along the world axes is selected, objects are moved only in the 2D viewport plane of the respective viewport. The movement of the objects can be restricted by means of the three "Mouse Lock" buttons pictured in the tool window. If the left button is active, the objects can be freely moved in the viewport window. If the button with the horizontal arrow is active, vertical movements of the mouse are ignored, so the objects move only in the horizontal direction. The converse is true for the button with the vertical arrow. If the object axes are selected as the axes of movement, you can move a selection along the object axes of the marked reference object. You can again restrict the movement to one of the object axes by selecting the x-, y- or z-button. If you choose the button under the x-, y- or z-axis button, you restrict the movement to the plane perpendicular to the respective axis. Example: The illustration shows a model of a closet with movable drawers in "Top" view. Since the closet does not stand at right angles to the viewport plane, you cannot open or close a drawer precisely when moving it along the world axes in the viewport plane - the drawer would run slightly off its tracks to the left or right. In object axes mode, however, we can conveniently move the drawer in its tracks along the drawer's x-axis. Direct Input of Coordinates from the Keyboard The coordinates indicated in the toolbox always refer to a particular corner of the bounding box that surrounds the reference object of the marked selection. You can specify the current point indicated by selecting one of the nine buttons placed at the corners and in the center of the box image. In the viewport window the corresponding corner of the bounding box will also be marked with a similar blue pin-button. 
You can directly input the position for the bounding box (and thus the position of the object) via the keyboard. Each alteration to the coordinates leads immediately to a corresponding change in position and a re-draw of the scene. Snap Functions In the button-strip directly above the viewport windows on the right you find the different snap functions that will facilitate the positioning of the objects in the scene. If, e.g., you switch on the snap function for grid lines, then an object will be "caught" automatically by grid lines when near to them. You can switch on the snap function via the magnet button next to the snap selection box. The particular point of the object selection that is caught by the snap function is given by the selected bounding box corner of the marked reference object. You can choose to snap a selection to: Grid Lines (The viewport grid can be switched on or off via the corresponding grid button - the size of the grid can be edited in the parameter field next to it) Grid Points Object Axes - 2D and 3D Object Center - 2D and 3D Object Points - 2D and 3D Object Lines - 2D and 3D Snapping can be done only in the 2D-working plane or in 3D mode with additional depth testing enabled. Snapping can considerably simplify working on a project. Think of a wall you want to align next to another wall. If you activate the "Object - Lines" snapping function and the corresponding corner of the wall you want to fix to the other wall, then you just need to move the wall near to the other wall and it will jump automatically into place. Center Selection in Viewport The
function moves a selection of marked objects, facets or individual points to the exact center of the window. You can call up this frequently used function in other work modes as well: a click with the right mouse button in a viewport window displays a popup selection with a list of functions including the "Center Selection" function. Drop Selection Use the option to drop objects to the ground. If an object hovers above another object it will land on it; if not, nothing happens. You can drop selected objects or hierarchies as a whole, or let each object/hierarchy fall down on its own path. Moving Objects using Inverse or Forward Kinematics Beneath the 3 selection buttons for selecting objects, facets or individual points there are 2 additional buttons for the Inverse and Forward Kinematics working modes. With the help of hierarchical kinematic modeling it becomes very easy to position a group of joints arranged in a hierarchy. More details can be found in the corresponding chapter about "Inverse- and Forward Kinematics". Movement in Hierarchy (only in Modeling Mode) Usually, hierarchically subordinated child objects follow all movements of their parent objects. In Modeling Mode, however, you can temporarily switch off this automatism. If you activate the right one of the two buttons shown above, you can move a parent object alone without moving its children with it. But this is a movement on the model level: the displacement takes place relative to the children, and the new distance between parent and children is passed on to the complete animation. In Animation Mode this function is hidden - all movements of parent objects are again passed on to their children. Effect on Following Path (only in Animation Mode) Every time you move an object in Animation Mode, a new position key is created automatically or, if a key already exists at this frame position, it is updated. Now, if you move backwards in the animation because you want to relocate an object at a particular frame position, you can decide via the two buttons depicted above how the repositioning of the object will affect the following movement path. Example: The illustration above shows a sphere moving through 3 key positions from left to right. We move back to the second keyframe, where the sphere is located in the middle of the window. If you now move the object in "Absolute Position" mode (illustration on the left), the position of the object is changed only for the current keyframe without influencing the following movement path - in the course of the animation the object returns to the old movement path after this key position. If, however, you select the second button, "Relative Movement - Path follows Movement", the object is moved together with its movement path (illustration on the right). Apart from correcting movement paths, this function is very useful if you want to make copies of already animated objects. After copying an object you just need to move it a little to the side in "Relative" mode, including its animation path, and soon you have a whole pack of figures running along the same, slightly displaced, movement paths. Moving Procedural Textures or Bitmaps (only in Modeling Mode) On operating the "Move" index tab, the content of the tool window changes and textures can be repositioned on a previously marked reference object. Now, instead of a bounding box enclosing the object, a grid is drawn that represents the texture or bitmap and its axes. 
Use the "Select Texture/Bitmap"-select box to decide whether you want to move the texture axis for a procedural texture or the texture axis for one of the bitmaps assigned to the object. Suppose, in the material dialog, you have chosen a bitmap for a selected object - then this bitmap will also show up in the "Select Texture/Bitmap"-select box and you can select it for work. A grid similar to that shown above appears with the dimension of the grid exactly matching the dimension of the bitmap. The size of the grid (and the respective bitmap) can easily be adapted to your needs in the "Scale Texture"-work mode. By moving the grid you also move the picture over the surface of the object. The direction of the projection can easily be verified by the direction of the z-axes, which is perpendicular to the bitmap grid. The location of the top or bottom of the bitmap can be determined from the y-axes. The x-mark on the x-axes shows the right side of the picture. The illustration above gives an example of a rectangle with a bitmap projected onto its center. After that the bitmap texture axis have been moved to the upper left corner of the object, which results in the bitmap projection being similarly displaced. Moving the origin of procedural textures or bitmaps is the same procedure; merely select the corresponding texture axis in the "Select Texture/Bitmap"-select box. Similar to moving objects, you have again the choice to move textures along the world axes or along the individual axes of the texture grid. The coordinates in the tool window indicate the center of the texture axis. The "Texture/Bitmap"
-button returns the texture axes back to the precise center of an object. .topic 20 - Menu "Edit Object/Texture" Modeling- versus Animation Mode Differences between Modeling Mode and Animation Mode See - "Moving Objects or Textures" Scaling Objects, Facets or a Point Selection With the help of the selections in the tool window you can change the size of a marked object selection. The program offers a variety of possibilities to do so. You can, e.g., simply enter the dimensions of the bounding box that surrounds the object selection. Using your mouse to drag the objects to the right size directly in the viewport window is another possibility. Finally, you can enter scaling factors, e.g., enter 2 for the y-axis to double the height of the object. Reference Point and Axes of Scaling The reference point from which scaling will take place depends on whether you are scaling along the world axes or along the object's own axes system: Scaling along the World Axes (only in Modeling Mode) Select the "Scale along World Axes" button in the tool-window in order to scale parallel to the X-, Y-, Z-world axes. Scaling with the mouse will take place only in the 2D-viewport plane or symmetrically in all three dimensions. With the help of the "Mouse Lock" buttons you can restrict the scaling directions again. If the left button is active, objects can be scaled in the horizontal and vertical viewport plane. If you choose the second button, objects are scaled evenly in the 2D-viewport plane. If the third button is active, the vertical mouse-movements are ignored, so objects are only scaled in the horizontal direction. The converse is true for the fourth button. If the last button with the box is selected, objects are evenly enlarged or reduced in all three dimensions. Reference Point of Scaling in World Axes Mode If you scale along the world axes you have two options for the reference point from which scaling will take place: Crosshairs - If you select this button, then crosshairs appear in the viewport windows. You can freely move them with the mouse to specify the reference point of scaling. To "grab" the crosshairs simply click with the mouse into the area between the 4 arrows in the center of the crosshairs. After positioning the crosshairs - if you want to scale the marked selection again with the mouse - you have to leave the area between the 4 arrows of the crosshairs again. Scaling will take place from the object axes center of each marked object. In hierarchies the object axes of the topmost parent of the marked selection will be used. Example: In the picture on the left you see the initial scene with 6 cylinders standing on a platform. You want to enlarge the cylinders evenly in the 2D-viewport plane, without changing the length of the cylinders. You have chosen the crosshairs as reference point of scaling and moved them to the center of the marked cylinder group. The picture in the middle shows the result after the scaling operation. The cylinders have the correct size now, but the scaling operation also moved them away from the reference point, so you have to reposition them again on the platform. The third picture shows the result after scaling along the object axes of each marked object. Since all objects were scaled from the origin of their own object axes system, all objects stayed in place. In this situation scaling along each object's own axes center was advantageous.
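The difference between the two reference points can be pictured with a few lines of Python. This is only an illustrative sketch with assumed data structures (a simple point list and an axes center per object); it is not CyberMotion's internal code:

def scale_point(p, ref, s):
    # p, ref: (x, y, z) tuples; s: (sx, sy, sz) scaling factors
    return tuple(ref[i] + s[i] * (p[i] - ref[i]) for i in range(3))

def scale_selection(objects, s, crosshairs=None):
    for obj in objects:
        # Crosshairs given: one shared reference point for the whole selection,
        # so objects also move away from or towards that point ("drift").
        # No crosshairs: each object is scaled about its own axes center and stays in place.
        ref = crosshairs if crosshairs is not None else obj["center"]
        obj["points"] = [scale_point(p, ref, s) for p in obj["points"]]

cylinder = {"center": (2.0, 0.0, 0.0), "points": [(2.0, 1.0, 0.5)]}
scale_selection([cylinder], (2.0, 1.0, 2.0))                              # cylinder stays in place
scale_selection([cylinder], (2.0, 1.0, 2.0), crosshairs=(0.0, 0.0, 0.0))  # cylinder drifts outwards

In the crosshairs variant all objects share one reference point and therefore also change their distance to it - exactly the drift seen in the middle picture of the example above.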
However, if you want to scale a group of objects as a whole entity, e.g., if you want to scale a house with all its elements, then you have to scale the whole group from a single reference point, otherwise all elements would grow at their own positions and overlap each other. Why is Scaling along World Axes only available in Modeling Mode? In an animation you always need a traceable reference point as well as reference axes that are used to perform the scaling operations recorded in the keyframes. These reference axes systems are always defined by the object's own object axes or the object axes of a parent object, if movements are inherited in a hierarchy. Scaling of Analytical Objects Analytically defined objects will only be scaled if this operation does not conflict with the mathematical description of the object's shape. This is made clear by a sphere, for example, which if scaled along the X axis is no longer a sphere, but becomes an ellipsoid that can no longer be described simply by its center and a radius. All objects can be increased or reduced symmetrically, of course. Furthermore, analytical cylinders can be scaled symmetrically in their base as well as along their longitudinal y-axis. If an analytical object is scaled in a group with other objects and the scaling would deform the shape of the analytical object, then it will only be displaced with the scaling movement of the group. Scaling along the Object Axes Select the "Scale along Object Axes" button in the tool-window in order to scale along the object axes system belonging to each object. In this scaling mode the object's own axes center always serves as the origin for the scaling - in hierarchies the object axes center of the topmost parent of the marked selection will be used. Via the "Mouse Lock" buttons you can choose an individual axis or a plane in which you want to perform the scaling. For instance, if you want to lengthen a cylinder, you would activate the longitudinal "Y"-object axis of the cylinder, but if you want to enlarge only the base of the cylinder, you would activate the "XZ" Mouse Lock button below the "Y" button - then the scaling takes place only in the xz-plane perpendicular to the y-object axis. If the last button with the box is active, then objects are evenly enlarged or reduced in all three dimensions. Example: The illustration shows a hub with 12 embedded spokes. The spokes are somewhat too short and we want to lengthen them along their longitudinal y-object axis, without broadening the circular base of the spokes. When constructing the cylindrical spokes, the object axes of each spoke have already been moved down to the base in the center of the hub. That simplifies the following operations. First we select the "Y"-axis button as the current "Axes of Scaling". Then we mark the 12 spoke cylinders we want to lengthen. Thereupon the object axes systems for all marked objects are drawn. For clarity, the complete axes system is drawn only for the reference object; for all other objects only the selected scaling axis is drawn. In the illustration you can see the green y-axis running along each of the spoke cylinders. Now you just need to click into a viewport window and move the mouse - the spokes will virtually grow out of the hub or sink back into it. Box-Dimensions With the help of the x-, y-, and z-parameters in the "Box-Dimension" field you can exactly define the dimensions of the bounding box surrounding the marked reference object.
If the object has been rotated and is not aligned to the world axes anymore then change to "Axis of Scaling - Scale along Object Axes" mode, so that the bounding box is aligned to the axes system of the marked object. When you input the dimensions for analytical objects, then all input values will be automatically completed, so that the mathematical description of the object is maintained. If, for instance, you enter the base radius of an analytical cylinder by changing the value for the x-dimension, the z-dimension will be adjusted automatically. Specify a Scaling Factor When scaling a selection with the mouse in the viewports you can simultaneous read the scaling factors from the "Scaling Factor" toolbox. However, you can also input the scaling-values for the individual axis directions directly via the x-, y-, z-parameters in the "Scaling Factor" toolbox. Then operate the button, to cause the change in size. The button in the tool-window beneath the scaling parameter resets all values to 1. Scaling in Hierarchy (only in Modeling Mode) Usually hierarchical subordinated child objects follow all movements of their parent objects. However, in Modelling Mode you can switch off this automatism temporarily. If you activate the right button of the two buttons shown above then you can change the size of a parent object alone without scaling the children of the parent with it. In Animation Mode this function is hidden - all transformations of parent objects will be inherited to their children again. Effect on Following Path (only in Animation Mode) Every time you scale an object in Animation Mode then automatically a new Scale key is created, or, if a key already exists on this frameposition, it is updated. Now, if you are moving backwards in the animation because you want to adjust the size of an object at a particular frame position, then you can decide via the two buttons depicted above, if the size of the object is changed only at the current keyposition or if the object is scaled also with the same scaling factor in all following keyframes. The second option is more or less the same as if you scale an object in Modeling Mode - the object changes its size relatively for the whole remainder of the animation. Scaling Procedural Textures or Bitmaps (only in Modeling Mode) On operating the "Scale" index tab, the content of the tool window changes to that indicated in the illustration above. Instead of a bounding box enclosing the object, now a grid is drawn representing the texture or bitmap and its axis. Use the "Select Texture/Bitmap"-select box to decide whether you want to scale the texture axis for a procedural texture or the texture axis for one of the bitmaps assigned to the object. Supposing you have chosen a bitmap for the selected object in the material dialog then this bitmap will also show up in the "Select Texture/Bitmap"-select box and you can select it for work. A grid similar to that depicted above appears. The dimensions of the grid correspond to the dimensions of the bitmap. It is now very simple to adjust the proportions of the bitmap to suit the size of the object. The scaling is carried out as usual with the mouse directly in the viewports or with help of the X-, Y-parameters in the tool window. The Z-parameter is not required this time, since we only want to scale a 2D picture. Accordingly you can only scale along the two X, Y-picture axes, too. Once the size of the picture is adjusted to the object's dimensions, you can easily move the bitmap in the "Move"--work mode. 
As you can see in the illustration above, even the scaling of procedural textures is possible. Once you have correctly adjusted the size relationships for a procedural texture in the material dialog, you do not need to repeatedly change back into the material editor to adjust the total size for all parameters. Instead, you can easily scale the procedural texture grid in the "Scale" work mode, resulting in scaling of the entire texture pattern. Scaling a Picture to the Original Size If you choose the <1:1> button, a scaled bitmap will be rescaled to its original size. This is primarily of importance in the particular case when you want to change the projection mode in the material editor to a cylindrical or spherical projection. For example, in the case of a cylindrical projection a bitmap is scaled automatically so that a scaling factor of 1 in the X-dimension would result in a picture perfectly wrapped around the object, with the beginning and end of the picture exactly meeting each other. A value less than 1 would lead to a label effect and a value exceeding 1 would result in overlapping of the picture. The same applies to spherical projections. A value of 1 in the X- as well as in the Y-direction would result in the bitmap wrapping to exactly fit the object. Lower values would again lead to a label effect. With these projection modes you should therefore start from the original size; only a reduction in the Y-direction is meaningful for the cylindrical projection, or a reduction in both directions when you want to obtain a label effect in cylindrical and spherical projections. .topic 21 - Menu "Edit - Rotate Object/Texture" Modeling- versus Animation Mode Differences between Modeling Mode and Animation Mode See - "Moving Objects or Textures" Rotating Objects, Facets, a Point Selection or Object Axes The tool window gives you all the methods to rotate a marked object selection. This can be done either with the help of the mouse or by directly inputting rotation angles. To rotate the selection with the mouse in a viewport window, move the mouse to the right or upwards while pressing and holding the left mouse-button. If the mouse is moved to the left or downwards, the rotation is in the opposite direction. The rotation angle by which you rotate the selection is displayed simultaneously in the "Angle of Rotation" box. There, you can also input the rotation angles directly. Then operate the button to apply the rotation. The button next to it sets all angles back to 0. Rotation About the World-Axes Select the -button in the "Axes of Rotation" box to rotate about the world axes. The world axes are lined up parallel to the X, Y, Z axes of the spatial cube. Using the "Mouse Lock" buttons you can again restrict the mouse rotation to certain directions. If the left button is active, objects can be turned about axes that are horizontal and vertical in the viewport. Which axes those are naturally depends on the view in the viewport. If the button with the horizontal arrow is active, then only rotations about axes that are vertical in the viewport are executed. If the button with the vertical arrow is activated, it follows that the rotation is always about axes that are horizontal in the viewport. If the last button with the circle is selected, then the rotation is always about an axis that points directly out of the viewport window. For example, with the Front view it is the Z-world-axis. Try it out, the principle is quickly recognized.
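The idea of rotating about an axis that points straight out of the viewport can be sketched in a few lines of Python. This is an illustrative example only - the function name and data layout are assumptions, not CyberMotion code:

import math

def rotate_about_z(p, angle_deg, ref=(0.0, 0.0, 0.0)):
    # Rotate point p about the Z world axis through the reference point ref.
    a = math.radians(angle_deg)
    x, y, z = (p[i] - ref[i] for i in range(3))    # move the reference point to the origin
    xr = x * math.cos(a) - y * math.sin(a)         # plain 2D rotation in the XY plane
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr + ref[0], yr + ref[1], z + ref[2])  # Z stays unchanged, move back again

print(rotate_about_z((1.0, 0.0, 0.0), 90.0))       # -> approximately (0.0, 1.0, 0.0)

Seen from the Front view, this is exactly a rotation about the axis pointing out of the window: only the two coordinates lying in the viewport plane change.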
Reference Point of Rotation If you rotate about the world axes you have two options for the reference point that defines the center of the rotation: Crosshairs (only in Modeling Mode) - If you select this button, then crosshairs appear in the viewport windows. You can freely move them with the mouse to specify the reference point for the rotation. To "grab" the crosshairs simply click with the mouse into the area between the 4 arrows in the center of the crosshairs. After positioning the crosshairs - if you want to rotate the marked selection again with the mouse - you have to leave the area between the 4 arrows of the crosshairs again. Rotation always takes place about the object axes center of each marked object. In hierarchies the object axes of the topmost parent of the marked selection will be used. The crosshairs option is not available in Animation Mode. In an animation you always need a traceable reference point and corresponding rotation axes you can refer to in the course of the animation. These reference axes systems are always defined by the object's own object axes or the object axes of a parent object, if movements are inherited in a hierarchy. For instance, you can use Group Objects in an animation to serve as movable reference points for rotations. Example: In the left picture you see 4 wheel objects that were selected for a rotation. For the reference point of rotation the crosshairs option is selected and the crosshairs moved into the center of the 4 wheels. Now, if you click into the viewport and move the mouse, the selection will be rotated as a whole about the crosshairs center (picture in the middle). The illustration on the right shows the result after rotation about the object's own axes center has been selected instead. All marked objects rotate simultaneously about their own object axes center. Rotation About the Object-Axes Select the -button in the "Axes of Rotation" box in order to rotate about the object axes belonging to each object. Then, via the "Mouse Lock" buttons you can choose the X-, Y- or Z-axis to enable a rotation with the mouse about this object-axis only. Reference Point of Rotation Axes Centre of the Reference Object (only in Modeling Mode, for rotating a selected group of objects about the object axes center of the marked reference object.) This option - like the crosshairs in World Axes mode - is only available in Modeling Mode, because in an animation only the object's own axes system (in hierarchies also the object axes of parent objects which pass their movements on to their children) can serve as a traceable reference point for rotations. This connection is essential to maintain the hierarchical independence of each object. If, in an animation, you want to rotate objects about a reference point other than the object's own axes or the object axes of parent objects, link your objects under a Group Object and use this Group Object as a movable reference point for your rotations. Rotation always takes place about the object axes center of each marked object. In hierarchies the object axes of the topmost parent of the marked selection will be used. Example: In the illustration you see a model of a space station. All elements of the space station are hierarchically subordinated to the big spherical object in the center. We want to rotate the space station about its longitudinal axis.
This would be almost impossible if we had to perform this with rotations about the world axes, which are aligned parallel to the spatial cube of the CyberMotion world. But if you choose the object axes for the "Axes of Rotation" then you simply need to select the y-object axis of the spherical parent object of the space station hierarchy and the space station will be rotated automatically about its longitudinal axis. Rotating about object axes systems is really essential, especially in all situations where the animation of joints is involved, for instance, when animating robots or the skeletons of characters. Line up object to world-axes The button rotates all marked objects so that the object-axes line up with the world-axes. This matches the initial orientation the object had when first generated. Rotation in Hierarchy (only in Modeling Mode) Usually hierarchically subordinated child objects follow all movements of their parent objects. However, in Modeling Mode you can switch off this automatism temporarily. If you activate the right button of the two buttons depicted above then you can rotate a parent object alone while all children of the parent stay in place. In Animation Mode this function is hidden - all movements of parent objects will be inherited by their children again. Effect on Following Path (only in Animation Mode) Every time you rotate an object in Animation Mode, a new rotation key is automatically created, or, if a key already exists at this frame position, it is updated. Now, if you are moving backwards in the animation because you want to adjust the alignment of an object at a particular frame position, then you can decide via the two buttons depicted above if only the selected object is to be re-aligned or if the whole following movement path is also rotated together with the object. This function is extremely useful whenever you want to transfer animation data from one object to another. Suppose, for instance, you have animated a walking character. You want to create a second copy of this character walking in another direction. To achieve this you only have to copy the character - all animation data will be copied with the model data. Afterwards you simply need to move the copy of the character in "Move Object" mode together with its movement path to a new starting position. Then - here in "Rotate Object" mode - you just rotate the figure, again together with its movement path, so that it faces into a new direction. If you now play a preview animation you can see that the second figure really walks with the animation data copied from the first character from a new starting point into a new direction. You did not even have to step into the complex animation routines a single time to achieve this. Example: A Boeing has been moved straight forward through 3 key positions. We moved back in the timeline to the second keyframe. Now we activate the second button "Rotate Movement Path with Selection". Then we rotate the Boeing anticlockwise by 45° together with its movement path. In the illustration you can clearly see how the complete following movement path is bent with the rotation and that the plane is now moving in a new direction. What is demonstrated here with a single plane object can also be applied to whole hierarchies. For instance, to move a character hierarchy - one that consists of many bones and the skin object - along a complex movement path, you could first animate a complete simple walking cycle of only two steps.
You could copy this basic walking cycle multiple times one after the other in the animation editor, in Relative Mode, to let the character walk a considerable part of the way straight on. Then, here in the "Rotate Object" menu, you move back to the starting point of the walking sequence. Move forward in animation time in several steps and rotate the character together with its remaining movement path again and again in the desired direction until the movement course has been completed. See the example in the illustration above. An animated character walks straight on along a street. Then, to let the figure walk across the street, the character and its movement path have simply been bent towards the street. On the other side of the street, the remaining path has been bent again so that the character again walks parallel to the street. All in all just a few seconds were spent to guide the figure across the street. Direction and Angle of Rotation in an Animation (only in Animation Mode) If you rotate an object in Animation Mode, a key is automatically generated. This key records a rotation axis, about which the object is rotated so that the object axes that were recorded in the previous keyframe will align with the object axes at the current frame position, and a corresponding rotation angle. There are always two possible directions to perform a rotation, e.g., the hand of a clock can rotate clockwise or anticlockwise. When a keyframe is generated, the shorter of the two angles will always be recorded. In Animation Mode you can read this angle in the "Angle to Previous Key" box. You can edit this angle, for instance, to reverse the rotation or to let the object rotate multiple times around itself. Example: You want to animate a propeller so that it turns a hundred times around its longitudinal axis. Select the propeller object and choose the corresponding object axis for the rotation. Then use your mouse to rotate the propeller - you need only rotate it a little bit, so that a key is generated with the rotation axis and an initial angle. Only when a key has already been recorded will the "Angle to Previous Key" parameter be enabled and you can input a new angle. For each revolution you have to add 360° (max. 100000°), so for a hundred revolutions 360° * 100 = 36000° is the correct value. Note - if you move back in time before this keyframe and there you rotate the propeller again, then new keyframes will automatically be calculated, first with the new rotation axis and angle to rotate the object from the previous keyframe orientation to the current orientation, and, simultaneously, a new axis and a corresponding rotation angle is calculated for the directly following keyframe, to rotate the object from the just changed orientation in the new keyframe to the next following keyframe orientation. This simply means that the previously made angle adjustments for the 100 revolutions of the propeller would have to be re-entered for the following keyframe again. Reverse Rotation As mentioned before, when calculating the angle that rotates an object from a previous keyframe orientation into the current orientation, the shorter of the two possible rotation angles is always chosen. If, for instance, you rotate an object clockwise by 270° and then start a preview you will notice that the object will rotate anticlockwise by 90° instead.
To correct this you can input the reverse angle directly for the "Angle to Previous Key" parameter or just press the button, to let the program do the work for you. The function will even convert multiple rotations correctly into the opposite direction. Rotating procedural textures or bitmaps On operating the "Rotate"- index tab the content of the tool window changes to that indicated in the illustration above. Instead of a bounding box enclosing the object, a grid is now drawn representing the texture or bitmap with their corresponding texture axes. Use the "Select Texture/Bitmap"-select box to decide whether you want to rotate the texture axis for a procedural texture or the texture axis for one of the bitmaps assigned to the object. Supposing you have chosen a bitmap for a selected object in the material dialog then this bitmap will also show up in the "Select Texture/Bitmap"-select box and you can select it for work. A grid similar to that depicted above appears. The dimensions of the grid correspond to the dimensions of the bitmap. The size of the grid and the respective bitmap can easily be adapted to your needs in the "Scale Texture" work mode. To be able to reposition the texture change to the "Move Texture" menu. Here in "Rotate Texture" work mode you can adjust the direction of the bitmap projection by rotating the texture axes. The rotation will always be executed about the center of the texture axes. You can rotate about the world-axes or about the texture axes just the same as when rotating objects. The direction of the projection can easily be verified by the direction of the Z-axis, which is perpendicular to the bitmap grid. On the basis of the Y-axis you can determine where the top or the bottom of the bitmap is located. Finally, the X-mark on the X-axis shows the right side of the picture. In the illustration above you can see an example for a bitmap that should be projected vertical along the cylinder. At first the bitmap is aligned horizontally. After rotation through 90 degrees around the Z-bitmap texture axis the bitmap is correctly projected along the cylinder's longitudinal axis. Likewise, you can also rotate procedural textures. Simply choose the corresponding texture axes system in the select box. All buttons and parameters have the same functions as in "Rotate" work mode. The button realigns a rotated axes system along the 3D world-axes. .topic 94 If you have already studied our tutorial concerning the assembly and animation of a little robot model then you actually know everything regarding Forward Kinematics - a hierarchy of objects with their object-axes serving as joints and points of rotation. Then, rotating an object about its object-axes will cause all hierarchical subordinated objects to follow the rotation. This is called Forward Kinematics. Inverse Kinematics can simplify the positioning of several joints simultaneously even more. You need only to pull on a finger, for example, to stretch the complete arm of a 3D-model - similar to a real jointed puppet. Furthermore you can restrict the rotation angles for each joint by setting limits for each individual rotation axes (Degrees of Freedom), thus preventing the model from unnatural twisted movements. Forward Kinematics (FK) or Inverse Kinematics (IK) is applied only in "Move Object" work mode. In the tool window, beneath the 3 selection buttons for selecting objects, facets or individual points there are 2 additional buttons for the Forward Kinematics and the Inverse Kinematics working modes. 
You can change at any time from the normal displacement work mode to the Kinematic work modes - for example move an object independently of its parent objects and right after that change into IK mode to pull at this object and with it at all parent objects. In practice: (You can find corresponding sample files for all examples in the folder "\projects\ik\".) First we load a simple hinge object "root" with a lever object "arm_1" attached to it (project file "ik_1_joint.cmo"). The hinge serves as root object for the hierarchy and is not movable itself. To prevent the hinge from taking part in the rotation movements of its subordinated joints we mark the hinge object and then, clicking with the right mouse button into a viewport window, call up the popup selection and select the "properties" entry. In the upper part of the "Properties" window some basic information about the selected object is provided, while in the lower part all settings for Kinematics can be arranged. Now switch on the function for our hinge object "root". This results in a fixed hinge that will not rotate when pulling at subordinated objects belonging to it. Next we select object "arm1". The rotations of the objects are always executed about a defined pivot point, which is identical to the center of the object-axes. So first we have to move the object-axes of the "arm1" object to the joint position, about which the individual rotations are later executed (shown in the right illustration above). Therefore, in "Move Object" work mode, select the object axes selection. Now you can easily move the axes system with the mouse to the desired joint position. Still, the lever object "arm1" has to be hierarchically subordinated to the hinge object "root". Therefore, call up the Select Objects dialog. Click with the left mouse button on "arm1". While holding down the button a box containing the name of the object appears. Drag the box over the "root" object until a tool-tip indicating "Link" appears. Release the button and "arm1" is now displayed to the right side of the "root" in the window and is subordinate to that object. Now we can return to "Move Object" work mode for our first test on Kinematics. Activate the Inverse Kinematics work mode by selecting the button. (Top view) If you now select "arm1" and pull at the object, it really tries to follow the mouse movement by rotating about its object-axes. However, it rotates about all 3 object-axes and therefore it breaks out of the hinge. Consequently we have to restrict the possible rotations to the axis that goes straight through the openings of the hinge object - in our example this is the y-axis of the "arm1" object. Use the function to return to the initial position of "arm1". Then call up the "Properties" dialog again. Kinematics and Degrees of Freedom Use the "Degree of Freedom" (DOF) parameters to set, for each rotation axis, a lower and an upper limit within which the object can be rotated. These limiting angles are always based on the initial position of the object when it was linked to a hierarchy. So you can first position and align an object, then link it to the hierarchy and finally enter suitable DOFs based on that initial position. However, you can also do it the other way round. If you first link an object to a hierarchy and afterwards move and rotate it to a suitable position, then you can set this position as the initial position just by operating the button. The settings for the DOFs then apply to this base position.
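How a DOF limits a joint rotation can be pictured with a small Python sketch. The names and data layout here are assumptions for illustration only; the program does this clamping internally: the requested rotation, measured as deviation from the base position, is simply held inside the lower/upper range of each axis.

def clamp_to_dof(requested_deg, lower_deg, upper_deg):
    # The deviation from the base position is kept inside the DOF range.
    return max(lower_deg, min(upper_deg, requested_deg))

# "arm1" from the example: x and z locked (0 degrees), y limited to -90..+90 degrees
dof = {"x": (0.0, 0.0), "y": (-90.0, 90.0), "z": (0.0, 0.0)}
requested = {"x": 25.0, "y": 130.0, "z": -40.0}          # what the mouse pull asks for
applied = {axis: clamp_to_dof(requested[axis], *dof[axis]) for axis in dof}
print(applied)   # {'x': 0.0, 'y': 90.0, 'z': 0.0} - the lever stops in the vertical position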
When an object has been rotated using the Kinematics functions, you can see beneath "Angles of Rotation - Deviation from Base Position" the angle values by which an object has been rotated from the base position. If you operate the button they will be set to zero again. However, if an object has already been animated over several key frames, then you can't change the initial base position any more. There is still the possibility to rotate objects regardless of the DOFs assigned to them. To do that, just execute a simple rotation about world- or object-axes in the "Rotate Object" work mode. These rotations are wholly independent of the Kinematic settings - neither the "Angles of Rotation - Deviation from Base Position" nor the DOF parameters are affected by them. Let's go back to the hinge and the lever object "arm1". The initial DOF values are set to ±180° for each object, which is equivalent to an unrestricted full rotation. We want to lock rotation about the x- and the z-axis for "arm1", so we just enter ±0° for the DOFs along the x- and z-axis. But we also want to restrict the rotation about the y-axis somehow, so that the lever can be moved only until it hits the base of the hinge and does not go through it. For that purpose we set the DOFs for the y-axis from -90° to +90°. After leaving the Properties dialog you can immediately observe a change in the drawing of the object-axes. Locked axes are set off against the unlocked axes in a grayed style instead of the usual axes colors. Now pull again at the lever object "arm1". It now rotates only about the y-axis and the movement stops in the vertical position at the top and at the bottom respectively. Now let's add a little bit to the complexity of the model by adding another joint object (file "ik_2_joints.cmo"). Select the new object "arm2" and call up the Properties dialog again. Again lock the x- and z-axis. For the DOFs we enter a slightly greater range this time, ±120°, so that the lever "arm2" can move on both sides exactly until it hits the base of the lever "arm1". Forward Kinematics versus Inverse Kinematics Let's focus again on the difference between the two Kinematic modes. With Forward Kinematics you select an object in a hierarchy of joints. When you move your mouse in a certain direction, then the selected object tries to follow that movement by a rotation around its object-axes. All objects subordinated to this object will execute exactly the same rotation, similar to the rotation of an object group about the object-axes of a reference object. The idea behind Inverse Kinematics is a different one. You mark an object at the end of a chain of jointed objects as reference object, in our example "arm2". Then you select with a second mouse click, while simultaneously holding the -key pressed, the topmost object in the hierarchy that you want to include in the movement - here "arm1". The object-axes of all marked objects - except those which had been locked for Kinematics - are drawn. Additionally a thick line between the individual joints is drawn to emphasize the connection of all involved joints. If you now give the direction of movement with your mouse, then "arm2" tries again to follow the movement, only this time all marked joints participate in the movement. This way the two levers of our model can be stretched or bent with simple mouse movements in the respective direction. Finally, here again is our example from the robot tutorial (file "projects\ik\ik_robot.cmo") with all DOFs already entered.
If you select the tongs object at the head of the robot and pull at it, then a movement will be calculated that involves rotations about 3 hinge joints and 1 rotational joint at the base of the robot at the same time. The pair of tongs is locked for Kinematics and therefore will not itself be rotated, but it has unlocked parent objects serving as joints and the movement is passed on to them. This is always of importance when you subordinate objects to a parent object serving as a joint, but don't want the subordinated object to be part of the chain of joints, e.g. if the added object is only an addition to the construction of the parent object. .topic 59 - Menu "Edit - Edit Object" You can make simple changes in the appearance of objects by selecting individual facets or points and moving, scaling or rotating them. With the functions in the Edit Object toolbox you have the ability to go much further in your object manipulation and modeling. Most functions are applied to selected facets or points of marked objects, so it is a good idea to switch on the "View - Points" entry in the menu bar to highlight the individual points from which an object is constructed. Only when this option is switched on are detached, unconnected points clearly visible, too. Functions: Facet Extrude Add Points and New Facets Delete Individual Points and Facets Boolean Operation Detach Object Triangulate Facets Magnetic Deformation Smooth Object Invert Normals Selective Facet Interpolation .topic 86 Menu: Edit Object - Facet Extrude Select the tab button at the top of the tool window to activate the "Facet Extrude" function. With the help of the "Facet Extrude" function you can conveniently add new facets to objects just by selecting facets on an object and dragging them out of or into the object. This operation is carried out in real time with the mouse directly in the viewport window or alternatively manually through the parameters and the button in the "Facet Extrude" box. Let's see an example: This simple box was generated from the primitives menu and then both front facets were selected. Activate the "Facet Extrude" tab. Move the mouse over a viewport and click with the left mouse button in the window. Hold the mouse button pressed and move the mouse a little bit to the right. By doing this, an additional surface segment is automatically generated, which is dragged out of the object with the movement of the mouse. The "Off" parameter in the "Facet Extrude" box shows the distance of the new segment from the original surface. Setting the "Bevel" parameter to a value other than zero will result in beveled sides of the newly generated segment. The picture shows the box with a bevel value set to 20 before applying the Extrude function. This picture shows the same process, except that the mouse was moved to the left and therefore the new beveled facet segment was dragged into the object instead of out of it. This time a "Facet Extrude" operation was performed directly on the surface, with beveling but without displacement in the depth. To achieve this you have to set the "Off" value to zero and the "Bevel" value to something other than zero. After that, simply press the "Extrude" button. New facets are generated directly within the old ones and you can see that this time the "Bevel" value determines how far the new facets are displaced within the old ones. After that you can do a normal "Facet Extrude" operation again to drag new segments in or out of the object.
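The geometric idea behind the "Off" and "Bevel" parameters can be sketched in a few lines of Python. This is only an illustration under assumed parameter semantics (offset along the facet normal, inset towards the facet center), not the actual implementation:

def extrude_facet(points, normal, off, bevel):
    # points: corner points of the selected facet; normal: unit facet normal.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    new_points = []
    for x, y, z in points:
        dx, dy, dz = cx - x, cy - y, cz - z                  # direction towards the facet center
        length = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1.0
        k = bevel / length                                   # "Bevel": inset towards the center
        new_points.append((x + dx * k + normal[0] * off,     # "Off": displacement along the normal
                           y + dy * k + normal[1] * off,
                           z + dz * k + normal[2] * off))
    return new_points   # side facets would then connect the old and the new point rings

front = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0), (0.0, 10.0, 0.0)]
print(extrude_facet(front, (0.0, 0.0, 1.0), off=5.0, bevel=2.0))

With off=0 and bevel>0 the new facets stay in the plane of the old ones but are inset, which corresponds to the on-surface extrude described above.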
This is how the box appears if you repeat the procedure for all 6 sidewalls. See also our tutorial: Facet Extrude; in a few minutes from a box to a plane model .topic 88 Menu: Edit Object - Add points and new facets Add Points Generating new points and later connecting them to form facets can extend individual object constructions. Completely new constructions are also possible. You could, for example, draw a simple triangle in the extrude-editor and create an extruded flat triangle object. Then add new points to the object relating to your construction requirements and connect them to form new surface facets. To generate new points for an object you must first change to the corresponding work mode by selecting the tab in the tool window. Then mark the object you want to edit and activate point selection by pressing the button. In the viewport window large crosshairs appear. Click within the 4 arrow buttons at the center of the crosshairs to grab them, then hold the left mouse button pressed and drag the crosshairs into the position where you want to generate the new point. Make use of the "Mouse Lock" and the "Snap" functions to support precise positioning. The x-, y- and z-coordinates of the new point can, however, also be input directly through the keyboard. To create the new point, just operate the button. Individual, unconnected points that are not components of a surface are only visible and highlighted if the "Points" entry under "View" in the menu bar has previously been switched on. Check on this if the newly generated points are not seen on the screen. Connect Points After you have produced new points, you will want to connect them to form new triangular facets. Simply mark three points and then operate the button to create a new triangular facet from them. In the foregoing example you see a basic square object. A point has been generated in each of the 4 corners of the window with the "Add Point" function. Four new facets are generated with the help of the "Add Facet" function. In each instance one corner point of the inside quadrilateral has been connected with two of the newly created outer corner points. .topic 310 Menu: Edit Object - Delete individual facets or points Deleting Points and Deleting Facets respectively With the help of these buttons, selected individual points or facets can be deleted from an object. On the left side of the above illustration is a flat frame object on which several rows of points are selected. If all the points of a triangular facet have been selected the facet is drawn shaded. In the illustration this results in two shaded strips in the frame. You have the facility to delete either all selected points or selected facets only. If you choose the function "Delete" , this results in the object shown in the middle illustration in the foregoing example - it is only facets that are deleted. Points that form part of a remaining facet are unaffected. If, with the same points selected, you had instead chosen the function "Delete" , then the object would look as shown on the right in the above illustration. All points selected are deleted and with them all facets that are related to these points. Note! If you delete so many points or facets of an object that no single facet remains, then the relevant object is automatically deleted. Deleting Unconnected Points Through erasing points and facets, points sometimes remain that do not form a corner of any of the remaining facets (as seen in the illustration on the upper right).
The function also sometimes generates superfluous unconnected points. These can all be removed with the function "Delete". Note! Unconnected points are only shown if the entry "Points" (under "View" in the menu bar) has been switched on. .topic 90 Menu: Edit Object - Boolean Operation If you choose the button in the "Edit Object" tool-window, a popup selection opens with several entries for joining objects in different ways. Those entries are also found in the popup selection, which opens when clicking with the right mouse button in a viewport window (see picture above), and again in the "Object Selection" dialog, thus enabling access to these functions in every work mode. A Boolean Operation (named after the English mathematician George Boole) describes the connection of 2 objects with simple logical operators like "AND" or "OR". CyberMotion makes use of this logic when joining two overlapping, facet based objects, which results in complex structures that otherwise would be very difficult to construct. For instance, it is very easy to add a hole to an object, just by placing a cylinder at the desired position for the hole and then operating a Boolean Subtraction "object" minus "cylinder". To apply a Boolean Operation, first mark the two overlapping, facet-based objects. Analytical objects and NURBS cannot be joined, but you can convert NURBS to facet-based objects and after that a Boolean Operation will be possible. When a Boolean Subtraction is applied, the resulting object will adopt the name, material and animation data of the object from which you subtract the other object. Otherwise the data is adopted from the selected reference object. If neither of the two marked objects is selected as a reference object, then the data of the object located higher in the hierarchy list will be used. Functions: Join Objects - No Boolean Operation Boolean Union Boolean Subtraction Boolean Intersection Problems using Boolean Operations Only Triangulate Intersection Join Objects - No Boolean Operation This function merely serves to manage 2 objects as one object. In many cases it is simpler to manipulate objects that are composed of many different parts (but of the same material) as a single object - as you continually save yourself the work of handling a group object. The object construction itself remains unchanged. No Boolean Operation is applied - just 2 objects being referenced as one single object. Boolean Union The Boolean Union function combines 2 objects in a way that removes all facets and points of each object that lie completely within the other object. In the process, the overlapping facets will be subdivided along their intersecting edges to create a seamless transition along the overlapping objects. This is best shown in wire frame mode. Two overlapping spheres are melded with a Boolean Union operation. The superfluous inner parts were eliminated and the additionally generated facets and points for the joining edges are clearly visible. Boolean Subtraction The Boolean Subtraction uses one object as a tool to cut a shape from the other object. Above you see an example of an operation "Box" minus "Torus". This operation is not restricted to simple primitive shapes; even complex structures can be combined. The picture shows a Boolean Subtraction of a 3D text object "e" from a "box". However, because of the intensive calculation process you should keep objects as simple as possible.
If, for example, you want to subtract a complete text string of high resolution 3D text from a box, I would recommend doing this character by character and joining the separate objects afterwards. Another example of use is to produce cross-sections of objects. In the picture you can see a primitive object "box" that was subtracted from a "Torus", so that the "Torus" was cut into two parts. The points of one half of the "Torus" were then selected and transformed into an independent object by means of the Detach Object function. Boolean Intersection A Boolean Intersection calculates a new object from the intersecting parts of the two objects. Problems using Boolean Operations When calculating a Boolean Operation there is one basic problem - to determine which facets or points of an object lie inside or outside the other object. With primitive convex objects this is very easy, but with non-convex objects calculations can go to great lengths. Although Boolean Operations are intended for objects with closed surfaces - since only then can you determine exactly which parts of an object lie inside the other object - you can also apply Boolean Operations to open objects. One criterion for whether a point lies inside or outside is to check if the point is located underneath a surface facet of the other object. This can be calculated from the facet's normal (a vector standing perpendicular to the surface). With closed objects these surface normals always point to the outside, but with open objects it depends on the user's point of view what is intended to be inside or outside. If the result of a Boolean Operation with an open object is unsatisfactory, you can try to repeat the operation after inverting the normals of the object. The picture shows a box with an open front. A cone is to be subtracted from the box to create a funnel opening. But the operation results only in an undesirable hole. This shows the same situation except that we have inverted the normals of the box before applying the Boolean Subtraction. Now it worked out well and the funnel was inset into the box. Only Triangulate Intersections This function is the preliminary stage for all Boolean Operations. The process subdivides the overlapping facets along their intersecting edges to create a seamless transition along the overlapping objects. But after applying this function no Boolean Operation is executed, thus enabling the user to select by hand the points and facets he wants to delete. Afterwards the objects can be joined again using the "Join Objects" function. .topic 340 Menu: Edit Object - Detach Object You can detach parts from one or several selected objects and then edit these parts as objects in their own right. Once again, you must select individual facets of the objects. Then choose the function. The marked part is then extracted from the object and managed as a new object. For each new object a dialog appears in which you can enter a name for the object - preset is always the name of the original object with a "2" appended. In the illustration on the left you see a sphere with points selected. The marked faces enclose a quarter of the sphere. Operating the function changes the sphere as shown in the picture on the right. It now consists of two distinct objects (moved slightly apart for clarity).
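The principle behind "Detach Object" can be sketched in a few lines of Python. This is an illustrative simplification (facets whose points are all selected are moved into a new object; both objects share the full point list here for brevity), not CyberMotion code:

def detach(obj, selected):
    # selected: set of point indices; facets are triples of point indices.
    keep, moved = [], []
    for facet in obj["facets"]:
        (moved if all(i in selected for i in facet) else keep).append(facet)
    obj["facets"] = keep
    # The new object gets the old name with a "2" appended, as described above.
    return {"name": obj["name"] + "2", "points": obj["points"], "facets": moved}

sphere = {"name": "sphere", "points": [(0, 0, 1), (1, 0, 0), (0, 1, 0), (-1, 0, 0)],
          "facets": [(0, 1, 2), (0, 2, 3)]}
quarter = detach(sphere, selected={0, 1, 2})   # -> new object "sphere2" containing facet (0, 1, 2)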
.topic 330 Menu: Edit Object - Triangulate Facets All marked faces of an object are broken into 4 smaller triangular facets by triangulation so that you can refine the structure of an object to improve its form or to work on it in greater detail. On the left of the illustration you see a frame object with selected facets and next to it the same object after the function has been applied. .topic 89 Menu: Edit Object - Magnetic deformation Objects can be distorted with the help of the magnet function. To apply this function you must first change to the work mode by selecting the corresponding tab in the tool window, then mark the object you want to edit and activate point selection by pressing the button. The function only works on selected points, so first mark all points you want to be exposed to the deformation. An annular object appears on the viewport window, which you position with the mouse while holding down the left mouse button. The different rings of the object give the area of influence of the magnetic field. The narrower the rings are the stronger is the magnetic field. The further apart the rings the weaker the magnetic field. For points that lie within the outer ring the distortion is only very weak, while near the center it is very strong. The strength relating to the extent of the magnetic field can be adjusted through the "Strength" parameter between values of -1 and +1. The higher the value is, the stronger the magnetic field. Negative values have a magnetic attraction, positive values, consequently, a magnetic repulsion. Activating the Magnet - button + left mouse button To produce magnetic distortion, position the magnet-object at the target point then simultaneously press the button and the left mouse button. You can also, while pressing the button, move the mouse, so that the influenced points also continue to move. The illustration shows a flat frame object. All points except those lying along the edges have been selected for treatment. The magnet is switched on and positioned over the middle of the frame. In the illustration on the right you see how the object appears after operating the key and left mouse button. .topic 320 Menu: Edit Object - Smooth Object This function smoothes coarse structured surfaces of facet-based objects by transforming them to a higher resolution. The facets are subdivided and new points are interpolated to round off the surface. To apply this function, simply mark the relevant objects and select the "Smooth Object" button. Here again our example from the "Facet Extrude" tutorial. On the left side the completed object and on the right side the same object after applying the smooth function twice. But be careful with this function, as each smoothing operation will significantly increase the number of points and facets in the object. On the other hand each work-step can be canceled or repeated with the Undo/Redo functions, so just try and play around with it a little bit. .topic 350 Menu: Edit Object - Invert Normals The object normal is a vector standing vertical on each facet of an object. It is important on the one hand for the visibility calculation and on the other hand for the light incidence and interpolation calculations of the facets. When you import objects of foreign formats (e.g. of the type [DXF] or [RAW]) that do not contain the necessary information for aligning the normals, or if you edit and add new facets for a selected object, CyberMotion attempts to automatically assign sensible normal alignments to this object/facet. 
However, this is not always possible. For instance, in many cases the normals are uniformly aligned when the object is imported, but it could happen that the normals are then all shown pointing to the interior instead of to the exterior. In this case you can use the function to reverse the normal alignments of all marked facets. To ease work you can switch on the "View/Normal"-entry in the menu bar to show all normals of an object. In the illustration above you can see two spheres, the left one with normals correctly aligned to the outside and the right one after applying the function with normals pointing to the inside of the sphere. See also: The surface normal .topic 360 Menu: Edit Object - Facets-Interpolation At the bottom of the toolbox are two more buttons, with the help of which you can switch interpolation off or on again for selected facets. This function is only relevant if Interpolation is chosen in the Material-dialog for the depiction of the object. For further explanation of the purpose of this function, as well as examples, see: Material dialog - Object Properties - Interpolation. .topic 630 The Principle of Skeletal Deformation (Skin and Bones) Without skeletal deformation the production of modern animation films and especially the animation of characters would be impossible. The skin and bones technique uses a skeleton which is subordinated to a corresponding skin object enveloping the skeleton. Now, every time you move a bone of the skeleton, the particular part of the skin previously assigned to that bone will be deformed and move with the bone - the character comes to life. What was previously left to hugely expensive animation studios, you can now do at home as well with the CyberMotion 3D-Designer. A typical skeleton for a human model. The pelvis is used as a root bone from which all other bones branch off. The root bone, being the topmost parent of the bone hierarchy tree, can be used to move the whole skeleton to align it within the skin object. The root bone, in turn, is subordinated to the skin object. If you want to move, rotate or scale the whole character, for instance, to move it to a new starting position, then you always have to select the character's skin - the subordinated skeleton will automatically follow each movement of the skin. But if you just want to move a part of the body, for instance, if you want to bend an arm, then you have to select the corresponding bone of the skeleton to rotate the bone together with the skin assigned to it into the bent position. Deformation of the arm by rotating the bone of the forearm Skeletons can be easily created in the "Edit Skin and Bones" work mode. First you generate a root bone. Additional bones can be added simply by selecting existing bones and pulling new bones out of the tip of these bones. Even complex skeleton hierarchies can be built that way with a few mouse clicks. Afterwards you link the completed skeleton under a normal polygonal- or NURBS-object. By doing this the skeleton will automatically recognize its new parent object as a deformable skin object. Now, in Modeling Mode, the individual bones can be aligned within the skin object. Every bone will influence only a certain area of the skin, so the next step will be to allocate the corresponding skin points to the individual bones. If you want to apply Inverse Kinematics to animate the skeleton, then you also have to define the DOFs (degrees of freedom) to restrict the bone rotations to their natural range.
After all these preparations you can start to animate the character. To be able to do this you have to change to Animation Mode. While in Modeling Mode each movement of a bone will be interpreted as an adjustment of the bone's position within the skin, in Animation Mode moving a bone will always produce a deformation of the skin assigned to this bone. Because of this clear distinction between Modeling Mode and Animation Mode you can always, even in the middle of an already set up animation, change back to Modeling Mode - e.g., to add further bones to the skeleton or to redistribute the point assignments - and the character will automatically adopt this changes for the whole animation. Bones can also be used to simulate muscles, e.g. you can realize facial animations by adding several bones under the skin of the character's head and assigning certain areas of the face to each of them. (In commercial animation films up to 60 bones are used to control the expressions of a face alone) Another feature: A bone object that is subordinated in a hierarchy under a normal facet based object will automatically recognize this object as the skin object belonging to it. But you can also do it the other way round. You can link "ordinary" objects under bones, for instance, to link a tool or a weapon under a hand bone, so that it is automatically taken along with the hand's movements. Or may be you want to pack a rucksack on the top of the spine bones. The "Edit Skin and Bones" menu If you select the button to change into the "Edit Skin and Bones" work mode then you change also automatically into Modeling Mode. The menu is divided into 2 pages. The "Edit Bones" page provides all tools to create skeletons while on the "Edit Skin" page tools are presented that allow you to allocate the points of the skin to the individual bones. For clarity, in the "Edit skin and bones" work mode all objects that are neither bone nor skin are drawn dimmed in simple wire-frame depiction. If a skeleton is already subordinated to a skin object then the skin is always drawn as a transparent object, so that the bones underneath can be easily recognized. In "Edit Bones" work mode, always the individual bones are the focus of interest and therefore only bones can be selected for editing. If you click on a skin object to select it, then automatically the top most root bone subordinated to the skin will be selected. In "Edit Skin" work mode also the points of a skin object can be selected to allocate them to the individual bones. Note: Only in "Edit Skin and Bones" work mode skins are automatically drawn transparent. But you can also switch on this option via the "View - Bones - Transparent Skin" menu entry for all other work modes. There are some more options, for instance, to hide skeletons or skins in the viewport depiction. See "Depiction in the Viewport Windows" Creating a Skeleton When you activate the "Edit Bones" page in the tool window, then a crosshairs appears in the viewports. Grab the crosshairs by clicking into the center between the four arrows of the crosshairs, and move it to the position where you want to set the starting point of the next bone. Then simply operate the button to mark this position with a big red point. Grab again the crosshairs and move it to another position. Now a thick red line is drawn from the starting point to the current crosshairs position. To create a new bone running along this line just press the button. Thereupon the name dialog appears where you can give it a suitable name. 
Since a complex skeleton can consist of dozens or even hundreds of bones, you should take the trouble to assign unambiguous names to all bones. The picture on the left shows the starting point and the line dragged out of it. On the right you see the bone created from it. The newly created bone is automatically selected and the starting point for the next bone moves automatically to the tip of that bone. Now you can easily drag out another bone from the tip of the currently selected bone. Every time you create a new bone by dragging it out of the tip of an already existing bone, the new bone is automatically subordinated hierarchically to its predecessor. The number of bones you can drag out of an individual bone is not restricted. You just need to select a bone again in order to drag out another bone from it. The new bone will automatically be inserted into the right place in the skeleton hierarchy. You can also create bones that have no length, for instance, as a parent root bone from which all other bones will branch off, like the pelvic bone in the human skeleton example at the beginning of this chapter. To create a root bone simply set the starting point at the desired position and then create the bone right away, without dragging a line out of it. Of course it is easier to create a suitable skeleton if the corresponding skin object already exists. Then you only need to draw and create the bones along the individual limbs. Allocate the Skeleton to a Skin Object As explained before, you only need to hierarchically subordinate the finished skeleton under the corresponding skin object in the Select Object dialog. There, in the object selection window, simply grab the root bone of the skeleton with the mouse and drag the whole skeleton branch onto the skin object. After that the skin object will automatically recognize its new children as a hierarchy of bones that can deform the skin object, and vice versa. However, the surface points of the skin still have to be distributed among the individual bones of the skeleton. Therefore, now switch over to the "Edit Skin" page of the tool window. Allocating Skin Points to Bones and Assigning Point Weights At the top of the tool window two additional "Selection" buttons are now available: Activate this button to select a particular bone. Change to "Point Selection" to be able to allocate points from the skin's surface to the currently selected bone. In "Change Weight of Point Selection" work mode you can select points of a previously made point selection to change the weight of individual points. To add a point selection to a bone you first have to select the corresponding bone in "Bone Selection" mode. Then you change to "Point Selection". Now you can add individual points to the point selection and remove them from it in the same way as in the other work modes. But in contrast to the other work modes, this time the point selection is saved as a permanent point selection belonging to that particular bone. Points can be allocated to several bones simultaneously; for instance, in the area of joints, the points overlapping the joint can be allocated to the bones on either side of the joint. Those points will be influenced proportionately by the bone movements. The weight by which a point is influenced through the movement of a bone is initialised and distributed automatically between the corresponding bones, but you can change the weight of individual points later.
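The weighting rule just described - weights are distributed so that they never add up to more than 1, and each bone pulls a point in proportion to its weight - can be sketched in a few lines. This is an illustration only, not CyberMotion's internal code; the function names are invented for this example:

def normalise_weights(weights):
    # weights: dictionary bone name -> raw weight; scale down if the total exceeds 1.
    total = sum(weights.values())
    if total <= 1.0:
        return dict(weights)
    return {bone: w / total for bone, w in weights.items()}

def deform_point(point, weights, bone_displacements):
    # Blend the displacements of all assigned bones according to the point weights.
    x, y, z = point
    for bone, w in weights.items():
        dx, dy, dz = bone_displacements.get(bone, (0.0, 0.0, 0.0))
        x, y, z = x + w * dx, y + w * dy, z + w * dz
    return (x, y, z)

# A point at the elbow, shared between upper arm and forearm:
w = normalise_weights({"upper_arm": 0.6, "forearm": 0.6})                 # -> 0.5 each
print(deform_point((1.0, 2.0, 0.0), w, {"forearm": (0.0, 0.4, 0.0)}))     # (1.0, 2.2, 0.0)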
The illustration shows an example of a point selection for an upper arm. For a better differentiation, those points already allocated to other bones are drawn in a separate green color. In the picture the skin points for the forearm and hand have been already allocated to their corresponding bones, so they appear in green. At the bottom of the tool window you can read the number of already distributed points and the number of remaining points. If not all points are allocated this will show up very soon in the animation, since these points will glue to their old positions when all other points will be taken along with the bone movements. Weight of Points The weight of a point determines, how strong a point will be influenced by the movement of the bone it was allocated to. If the weight is smaller than the maximum value of 1, then the point falls behind the movement of the bone. If a point is allocated to several bones at the same time, then the weight will be distributed evenly among these bones, so that the total weight will not exceed the maximum value of 1. In case of character animations all points have usually a total weight of 1, because it makes no sense for individual points to fall behind a straight forward walking character. But, if you use bones to simulate facial muscles, then the bones are to pull with different strength at the allocated points of the face instead of just moving the assigned area back and forth. Initial Weight of Points - Mouse Selection An initial value (0.01 to 1) for the point weight can be set in the "Weight of Selection" box. With this default weight all points are initialised, that you add with your mouse to the point selection. To change the weight of already selected points switch over to the "Change Weight of Point" selection. In this work mode you can only select points from the point selection already allocated to a bone. If you change now the "Weight of Selection"-parameter, then all currently selected points of the selection will adopt the new value. Selecting Points using a Bone Radius Often skin points lie in a certain radius around the bone they have to be allocated to, for instance, the skin points for legs and arms. If you select a bone in "Edit Skin" work mode, then always two capsular shaped outlines are drawn around that bone. You can automatically add all points within that area to your current point selection if you press the "All Points within Radii - Add to Selection" button. The initial weight of the points added to the selection depends on wether they are located within the inner radius or between the inner and outer radius, respectively. The default "Weight of Selection"-value will be valid for all points lying within the inner area of the two capsules. Then, from the inner to the outer radius a weight transition from the default value down to zero is calculated. For this transition four different filter functions can be applied. A high resolution grid is best suited to demonstrate the differences. A single bone is subordinated to this grid patch. A point selection was added using the "All Points within Radii - Add to Selection" function. Inner and outer radius were of the same size, so all points within the capsular outlines were allocated with the same initial weight of 1. Then, in Animation Mode, the bone was lifted slightly above the grid patch. The second picture shows the result - all points allocated to the bone follow with exactly the same displacement. Now we want to test the filter functions. 
In the "Change Weight of Point" work mode we first reduce the inner radius to a very small area around the bone, so that a transition can be calculated for those points lying between the inner and the outer radius. Operate the "All Points within Radii - Change Weight" button to calculate the new weights for all marked points located within the capsular outlines. Now we change back into Animation Mode. The bone movement is adopted now by the point selection with different weights within the transition area. In the illustration you see the effects of the different filter functions. The illustration above shows some details from a little video demonstrating a facial animation (www.3d-designer.com/en/galery/galery.htm). In addition to the neck and head bones the model is provided with eyes, a set of teeth and some additional bones in order to simulate facial muscles. On each side of the mouth a bone was added to control the corners of the mouth and parts of the cheek. If the point weights have been assigned properly you only need to scale down the bone slightly to "pull" at the corner of a mouth and to conjure up a little smile on the face. To transform this smile in a full laugh a third bone was applied to control the up and down of the lower jaw. You can also rotate the mouth bones a little bit downwards to change from the smiling face to a somewhat morose appearance. Take care, that all bones of the face as well as the eyes and the set of teeth are subordinated to the head bone. Then, if you move or rotate the head bone, the whole head including all subordinated objects and bones will follow this movement. Surprisingly enough, only 6 additional bones were applied to animate the facial expressions in this demo video. (In contrast to commercial films, where up to 60 bones are used to animate the complex interacting of facial muscles.) See also: Animation Tutorial - Character Animation Example files: An example of a character already animated in a simple walking sequence is provided in the projects-folder under "..projects/character/man_walk.cmo". You can also find the pure animated skeleton (without skin) of that scene in the project folder "..projects/character/skeleton_walk.cmo". The model of the character was provided kindly by the artist Stefan Danecki. .topic 80 - Menu "Edit - Deform Object" Deform Objects This function offers powerful and easy-to-use features for deforming facet-based objects. These are fully animated and bitmaps as well as textures will follow the object's deformation like a tight-fitting skin. A number of things should be kept in mind: Deformation is only possible for a complete selected object (or an object branch) rather than selected points/facets. In an object branch, all objects of lower hierarchy will follow the deformations of higher hierarchy objects, but additionally, they can undergo their own deformation. So this function works much like any other complex hierarchical animation - see our tutorial’s robot arm example. Single points cannot be selected and/or deformed. Several types of deformation can be used at once on an object/branch. Deformation is of a temporary nature and can be adjusted or even completely reset by changing the deformation parameters at any time. This function is primarily a powerful extension of the program's animation features. Producing a deformation animation is as straightforward as producing one with object rotation or movement. 
If you change any deformation parameter of any object, a new key frame will automatically be produced to store the animation data. Between any two key frames, deformation values will be interpolated from the values of both keys. Analytical Objects, by definition, cannot be deformed by facet based deformation. But any such object included in a deformed hierarchy branch will correctly change its position according to the deformation applied to its master object(s). The same goes for camera and light objects included in an animation branch. The button will deactivate any deformation settings. This is rather useful when you want to use other working modes. For instance, if you want to move or rotate objects, these changes would be calculated for the objects before deformation is recalculated. As it is rather confusing to handle these complex operations, you can switch off the deformation and reactivate it by pressing the button again after you have placed the object(s) precisely where you want them. Deformation Axis Deformation always takes place in respect to a specific axis of the object. For example, if you want to twist an object in relation to its y axis, you will have to select precisely that. But you can always switch to a different axis, because the deformation functions can always be undone by changing the parameters - keep in mind that deformation is part of the animation feature and does not permanently change the object's form. So even the most ridiculous deformation can be undone by deleting the animation, returning your object to its original form. Deformation types Much like the other working modes, deformation offers two different ways to work on the objects: Direct input of deformation parameters as well as real-time deformation by pressing the left mouse button and dragging. You can choose between one of the three deformation types "Bend", "Twist" or "Inflate" by selecting the corresponding tab button in the tool window. You can combine all deformation types, for instance, you can start by twisting the object, then bending and inflating it by the desired amount. Bend Object The "Bend Object" deforms an object along a previously selected axis. The example above shows a long block made up of several segments to form a single object. Deformation is selected for the y-axis in this case. In the "Bend Object" menu, the z-axis has been selected as the additional bending axis. See the deformed result (2nd object from left), where the object has in fact been "grabbed" by both ends of its y-axis and bent around its z-axis. Maximum bending amount is ±1. A value of zero will return the object to its original form. Objects can be bent from either or both of their ends. The screenshot above shows the different results of "grabbing" the object either from both of its ends or only one end. Settings for these functions are made using the three buttons also shown in the screenshot. Twist Object This function is used to twist objects about themselves. Beginning at the left side, this screenshot shows the object in its original form - a solid block. The 2nd object shows the result of twisting by 120 degrees along its y-axis. As before, you can use the buttons to twist the object from both-ends or one end of the axis, with the results being shown by the last two objects in the screenshot. Deformation can be adjusted to any value around ±180 degrees. A value of zero will return the object to its original form. This is a rather impressive example of a torus deformed by the twist function. 
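The twist deformation shown here can be thought of as rotating every point of the object about the chosen axis by an angle that grows with the point's position along that axis. A minimal sketch, assuming the y-axis was chosen as the deformation axis (illustrative Python, not CyberMotion's implementation):

import math

def twist_y(points, max_angle_deg, y_min, y_max):
    # Rotate each point about the y-axis by an angle proportional to its position
    # between y_min and y_max; +/-180 degrees is the maximum the dialog allows.
    twisted = []
    for x, y, z in points:
        t = (y - y_min) / (y_max - y_min)       # 0 at one end of the object, 1 at the other
        a = math.radians(max_angle_deg * t)
        twisted.append((x * math.cos(a) - z * math.sin(a),
                        y,
                        x * math.sin(a) + z * math.cos(a)))
    return twisted

# Twisting a block by 120 degrees along its y-axis, as in the screenshot:
block = [(1.0, 0.0, 0.0), (1.0, 0.5, 0.0), (1.0, 1.0, 0.0)]
print(twist_y(block, 120.0, 0.0, 1.0))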
Inflate Object This function allows you to either inflate or deflate an object. Again, the screenshot shows a simple object shaped like a block. The 2nd object from the left has been inflated by using the "blow-up" function, where the object is deformed much like a balloon. The next object has been deformed using the same function, but set to a scale value lower then 1, thus inverting the deformation curvature. As before, you can use the buttons to deform the object from both-ends or one end of the axis - the result being shown by the last object in the screenshot, where the deformation was made from the upper end of its axis, but without the curved result. Use the "Scale" parameter to define the amount of deformation. A value higher than 1 will inflate the object, while values lower than 1 will result in deflation. A value of zero will return the object to its original form. Object Resolution As the exact shape of an object will always depend on the number of points and facets, you will need a reasonably high object resolution for smoothly curved shapes. Using the button "Object Resolution" will allow you to increase the resolution by splitting the facets, thus increasing the total number of points/facets in an object. This function is in fact identical to the one employed in the "Edit Object" working mode, but working on the complete object rather than selected facets. The screenshot again shows a simple block followed by a version deformed by bending. After using the function "Object Resolution" twice, the object has been finely shaped with the curvature becoming more pronounced with each step (see right). In practice, however, you would try to find a compromise between visual quality and point/facet count, as more facets will inevitably slow down the rendering process. Hierarchical Deformation Deformation will be inheritable throughout a hierarchical object branch, meaning that lower order objects will deform according to the deformation rules applied to higher order objects. The screenshot shows a typical example of using this feature to good effect: Rectangular "arms/legs" and spherical "hands" have been associated in an object branch with the rectangular "torso" of the puppet. The "head" consists of spheres and cylinders and is also integrated into the object branch. Now by selecting the "torso" and deforming it using the "Twist Object" and "Bend Object" functions, the deformations will also work in the lower order objects in the branch. Note that all spheres used in this animation are analytical objects! While analytical objects cannot be deformed (as they are made up of a mid point and radius in this case rather than points and facets), still, the midpoints are shifted according to the deformation applied to the higher order object, so they will in fact follow the deformation instead of staying in the same position. Another special feature is the flexible texture function, which is not aligned with the object-axes, but smoothly follows an object's surface even when it is deformed. See also: Tutorial - Animation and Deformation - Dolphin Movements .topic 230 The camera settings and the various render options are explained here. 
Camera - Movement and Alignment There are several possibilities to move a camera and align it to a scene Render Options Choose an appropriate rendering algorithms, picture resolution and special effects to determine the output quality of your picture calculation Start Rendering of an Image or Animation How to start or interrupt rendering of an image or animation and how to save pictures and animations Autostereograms A little add-on - autostereograms, better known as magic pictures .topic 46 - Menu "Edit - Camera" The Camera The camera-menu gives you freedom to move interactively in the area and to insert any rotation and inclination angles for the camera. As with a real camera, you can implement Tele-photo or wide-angle settings over a zoom feature. Positioning the camera is very flexible in CyberMotion. On the one hand you can edit directly the coordinates of the camera or move with help of the arrow buttons next to the coordinates along the world-axes or along the object-axes of the camera or even in an orbit about another object. On the other hand - since the camera can be manipulated like every other object - it can also be repositioned directly (in the "Move Object" mode) or rotated to set the viewing-direction (in the "Rotate Object" mode). If you move or rotate the camera object in a viewport window you can see in the camera viewport how the visible camera-detail changes. Furthermore you can change the camera position just by clicking in the camera viewport and moving the mouse, independent from the work mode present. Finally you can arrange the camera under an object hierarchy. Suppose you want an object to fly through an area and the camera should always line up on the flying object. If you link the camera hierarchically under the object, the camera is automatically moved with the flying object when the flying object is moved, scaled or rotated in the respective work menus. Camera Alignment At the top of the tool window are three angle instruments for the alignment of the camera. You can enter the angles via the keyboard or just click into an instrument and drag the pointer to the desired position. Inclination - The left instruments shows the inclination angle (pitch) of the camera. Direction - In the center box is shown the direction angle of the camera. Roll - The right instrument displays the lateral inclination-angle, which is the rotation of the camera about its viewing axis. The dial consists of a crosswise-beam, which has markings at both ends. This works precisely as the dial showing the horizontal level in an aircraft. If you roll the camera to the left or right along its axis the horizon rolls in exactly the opposite direction. Rolling through 180 degrees turns the camera upside-down and the markings on the beam point to the underside. Step - This parameter defines the step-width, with which the camera is rotated or moved along its axes when rotating or moving the camera via the arrow buttons next to the edit fields. Camera Position The current location of the camera is shown in the "Camera Movement" box. The coordinates can also be input directly from the keyboard by choosing the X, Y, Z-buttons. Moving the Camera using the Coordinate Arrow Buttons You can use the arrow buttons next to the camera coordinates to move the camera throughout the area. Operate the buttons beside the X-coordinate, and the camera moves along the spatial X axis. The same is true for the other buttons. 
The step-width with which you move and rotate the camera can be set via the Step parameter to suit your requirements. It would be tedious if you could only move along the world-axes. Quite often you must move around an object or in the viewing direction of the camera to get the best viewpoint, and it would be very time-consuming to trace this movement along the world-axes. Therefore, in the "Camera Movement" box there are three buttons with which you can determine the movement-type: movement along world-axes: If the button is selected, you can move along the X, Y, Z world-axes - up, down, forwards, backwards, right or left. movement along camera axes: If the button is selected, then movement is executed along the camera axes. For instance, if you have previously positioned your camera so that it is inclined to the left at a certain angle and you now operate the movement-arrow for movement to the left, the camera - moving on the camera-axis inclined to the side - moves to the lower-left. The same is true for all other movement directions. If, for example, you turn the camera on its axes to line it up on an object, the camera automatically points in the direction of the object. Therefore, in camera-axes mode you need only operate the arrow button for movement along the Z-axis and the camera moves directly toward or away from the object. circular movement about an object: Select the button if you want to move in an orbit about an object. The relevant object must previously have been marked in the Select Object dialog. If no object has been marked, the camera moves in an orbit about the world center. Choosing the X-arrow buttons causes a horizontal circular movement of the camera. Similarly, choosing the Y-arrow button causes a vertical circular movement, during which the camera lens always remains lined up on the object. Moving the Camera Directly in the Camera Viewport Window By clicking in the camera viewport window and moving the mouse you can easily change the camera position, independently of the current work mode. The movement direction again depends on the selection made in the camera menu. If mode is active then the camera moves on the x- and y-planes. Additionally pressing down the -key will move the camera to the front or the rear along the z world-axis. In mode the camera moves along the camera axes, and if circular movement is set then the camera moves in an orbit about an object. Zoom The "Zoom" parameter directly influences the "width of focus" of your camera. By increasing or reducing the "Zoom" parameter you can achieve wide-angle or telephoto effects respectively. As with a real camera, there is perspective distortion of the picture. Lining Up the Camera The "camera" button lines up the camera on marked objects, turning the camera so that it points at the center of the objects. Center the Camera By choosing the "camera"
button the camera leaps directly to the center of the selected objects. If you are then within a closed body it is advisable to move it a little way from the object. .topic 47 - Menu "Options - Render Options" - Short Cut: + "R". In this dialog you can plan picture resolution and settings relevant to the realism of the scene. Render Quality Choose between three rendering algorithms to determine the output quality of your picture calculation Scanline - Low quality but good for fast animation preview renderings Raytracing - High quality, real reflections, transparencies with refraction and shadows Global Illumination - Raytracing and Photon Mapping - With the photon mapping technology the diffuse indirect illumination in a scene and caustic light reflections can be included in the calculation of the picture. Picture Resolution Select a standard picture format or specify your own resolution Field Rendering Field Rendering for interlaced video Shading Mode Several shading options Render Options Use this global buttons to switch on or off special effects Lens Flares Particle Systems Bones Starfield Object Halos Volumetric Spotlight Depth of Focus Sparkle .topic 380 In CyberMotion you can choose between three rendering algorithms for depiction, which have big differences in the underlying technology and picture quality. The scanline method is by far the simplest method and best suited for fast animation previews. Raytracing offers high quality and can simulate real world phenomena like mirror reflections, refractions in transparent objects and shadows. Above all stands the Global Illumination algorithm. The implemented photon mapping technology combines the pros of raytracing - reflection and refraction - with the ability to render also the indirect illumination caused by diffuse reflections in the scene. Scanline Raytracing Global Illumination - Raytracing and Photon Mapping Scanline The scanline algorithm scans the monitor screen line by line. Each line corresponds to a so-called scanline. These scanlines are now compared with the coordinates of the individual facets projected onto the screen and any intersections are calculated. If two facets overlap at the intersection-lines on the screen, the distance from the camera of each point of each line is calculated. The point closest to the camera is then drawn. This algorithm enables objects that overlap each other to be represented correctly. This procedure renders very rapidly in comparison with the raytracing algorithm. However, certain photo-realistic effects, such as true reflections, shadows or refractions through transparent objects cannot be rendered using this procedure. Transparency will be correctly depicted but refractions, e.g. the magnification of a magnifying glass, can only be calculated with Raytracing. Raytracing In nature, a light-ray that leaves a light-source, reflects off different objects and then falls, sometime later, into the eye. Raytracing is exactly the reverse process. A "viewing ray" is sent out from the camera-viewpoint, through a projection-plane (the screen) and then tested for an intersection with an object in the area. If an intersection occurs the relevant pixel of the screen - at the point where the viewing ray passes through the projection-plane - can, therefore, be drawn in the calculated surface-intensity of the object. The viewing ray can also be followed further, however. At a mirroring object the angle of reflections is simply calculated and a new ray started searching. 
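This recursive follow-up of viewing rays can be written down in a few lines. The sketch below only illustrates the recursion scheme; the scene intersection and the shading are handed in as placeholder callbacks, and the splitting into refracted rays described in the next paragraph is omitted (names and structure are assumptions for illustration, not CyberMotion's code):

def trace(ray, depth, max_depth, intersect, shade, background=(0.0, 0.0, 0.0)):
    # intersect(ray) returns a hit description or None; shade(hit) returns the
    # directly lit color, the mirror factor and the reflected ray. Both callbacks
    # stand in for the scene database and the material system.
    hit = intersect(ray)
    if hit is None:
        return background
    color, mirror, reflected_ray = shade(hit)
    if mirror > 0.0 and depth < max_depth:
        bounce = trace(reflected_ray, depth + 1, max_depth, intersect, shade, background)
        color = tuple(c + mirror * b for c, b in zip(color, bounce))
    return color

# Toy usage: the primary ray hits a weakly mirroring surface, the reflected ray misses.
print(trace("primary", 0, 2,
            intersect=lambda ray: "hit" if ray == "primary" else None,
            shade=lambda hit: ((0.2, 0.2, 0.2), 0.5, "reflected")))   # -> (0.2, 0.2, 0.2)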
The surface intensity for the first object met then comes recursively from the incident light in addition to the light reflected from the other objects that are met. With this algorithm it is also possible to calculate the split in the viewing rays that is necessary with transparency. In order to render transparency realistically the surface-reflections are calculated from reflections in addition to the intensity of the light coming from behind. A mirror-ray and simultaneously a split viewing ray is calculated. If these viewing rays meet another transparent or mirror object, the whole process is started again, so the color for rendering only a single screen-point results from a whole collection of viewing rays. The recursion levels for reflections and split rays can be entered separately to keep the rendering time within reasonable limits. Raytracing - Parameters Almost all settings in this box only influence the depiction of the picture in raytracing mode. The only exception is the -button. With it you can switch transparency on or off for depiction in Scanline Mode too. Shadow - Select this button if you wish the picture to include shadows in the rendering. When calculating the light intensity for a surface, then first a light ray from the surface point to the light source is checked to see if there are any objects - or the object's own facets - between the light-source and the point being rendered. If there is, the calculation is broken off for this particular light source. If a transparent object casts a shadow then the light color is filtered by the objects surface color. Rendering time increases greatly with each light source, especially if soft shadows with multiple shadow sensors are calculated. Therefore, you can decide for each individual object if it does cast a shadow by using the function in the material dialog. For convex objects , also switch on the button in the material dialog to reduce rendering times (objects with a convex body shape do not cast shadows on their own surfaces). Furthermore, the shadow-calculation can be switched off for each light-source using the option in the light dialog. Multiple Shadow-Sensors - Switch on this option if you want to generate a soft-shadow effect. Standard light sources like the lamp, sun and spotlight are defined with a specific radius. With it, instead of originating from a single point-light-source, the light comes from a spherical area light-source. For each light type you can enter a number of shadow-sensors in the light dialog. If this value is 1, a completely normal hard-edged shadow is rendered. If you use a value greater than 1 (say 21 for example), that number of shadow-sensors are used to scan the area of the spherical light area to determine, how much of the light-sphere is hidden by other objects. The results of the shadow-sensor evaluations are interpolated so a soft shadow is rendered. If the radius of a light source is very high and the number of shadow-sensors low, for example, it produces the effect of several light-sources standing close to each other. The greater the number of shadow-sensors and the less the radius then the better will be the soft shadow effect. The number of additionally calculated shadow-sensors should be kept as low as possible due to the rendering time required. In the forgoing example-picture the shadow was rendered with 9 shadow-sensors and a sun-radius of 40. Reflection - Surface reflections are included in the picture calculation. 
Between 1 and 25 reflections can be entered, which determines the maximum number of ray retraces in the scene. However, in most cases a mirror-depth of 1-2 is sufficient. Too many reflections can produce quite a confusing reflected image (especially at low resolutions) and cause the rendering time to climb steeply. The same is true for the transparency effect described next. Transparency - Transparent objects will be calculated with real refractions. The maximum number of retraces can be entered from 1-25, as with the reflections. You can render transparencies even in scanline mode, but the depiction differs entirely from the results you achieve with raytracing. By tracing refracted rays in raytracing mode you get a physically correct representation of refraction in transparent materials, whereas in scanline mode only a simple check is carried out to determine whether a point lies behind the transparent surface point. If it does, this point is simply filtered through the transparent object. Antialiasing - A low screen resolution of, say, 320 x 200 pixels often produces an undesirable step effect. A line running diagonally looks rather more like a stairway than a straight, clean line. Antialiasing smoothes such effects. With it, each screen pixel to be rendered is divided into several smaller sub-pixels and a search ray is calculated for each sub-pixel. An average intensity for the screen-point is then calculated from the intensities of all sub-pixels. Steps that depart slightly from the actual line are now plotted using the averaged intensity, and altogether the impression of a somewhat blurred but more representative line is produced. The quality, and with it the number of sub-pixels to be calculated, can be set from 1-4: 1 = 4 sub-pixels 2 = 9 sub-pixels 3 = 16 sub-pixels 4 = 25 sub-pixels Antialiasing is only applied to certain pixels, so the rendering time does not rise by 4, 9, 16 or even 25 times just because so many more pixels are calculated. Only those pixels that deviate in their color-value by a threshold value from that of their neighbors are subdivided into smaller pixels. This threshold value can be set via the corresponding parameter in the raytracing box. It is also possible to set this value to zero; then every single pixel will be subdivided into sub-pixels and rendering time increases considerably. This should only be applied for a final high-quality rendering, for instance when severe aliasing effects arise. Don't forget to reset the value afterwards to a common value of about 0.8. Negative Antialiasing - A negative value (-1) can also be entered for antialiasing. In contrast to calculating additional intermediate pixels as in positive antialiasing, negative antialiasing reverses this process. Instead of more pixels, only half the pixels needed to represent the rendering are calculated. The color-values for those pixels that are not calculated are interpolated from the surrounding pixels, so that halving the calculation resolution does not reduce the picture quality too severely. Advantage: up to 200% speed-gain. This allows a lot of quick control-pictures to be calculated and the alignment of shadows, reflections and illumination to be checked. Global Illumination - Raytracing + Photon Mapping Raytracing is a standard for high picture quality and for realistic reflections and refraction.
One of the major drawbacks of a general raytracing implementation is that it does not take the indirect illumination into account - the light reflected onto a surface from other objects in the scene, as opposed to the direct light from a light source. Usually a constant light intensity can be defined to simulate this indirect lighting, but that is a very poor approximation. Especially in architectural scenes the illumination in a room is dominated by indirect light reflected many times from the diffuse surfaces in a building. With the photon mapping algorithm, CyberMotion provides a global illumination model that combines the pros of raytracing - reflection and refraction - with the ability to also render the indirect illumination caused by diffuse reflections in the scene. Rendering a picture with photon mapping is a two-pass procedure. In a preliminary run small packets of energy (photons) are emitted from the light sources in the scene. Similar to ordinary raytracing, the paths of these photons are traced through the scene and the distribution of photons is saved in a three-dimensional data structure called the photon map. After the photon map has been calculated, the picture is rendered in an ordinary raytracing run and the photon map is evaluated when calculating the incoming light intensity for a point. Photon mapping has further advantages: Color bleeding - for instance, when a green wall casts greenish reflections on a neighbouring white wall. Caustics - Caustics are light reflections from highly specular surfaces or, e.g., the light gathered in a focal point after transmission through a glass lens. Example of caustic light reflections beneath a little glass figurine For a deeper understanding of photon mapping see also: Photon Mapping - Introduction and examples Light Dialog - Photon Emission Parameters Material Dialog - Photon mapping object properties The Photon Mapping Parameters for the Evaluation Process in the Raytracing Pass You can make use of photon mapping in two different ways: Only Indirect Illumination - Only the diffuse indirect light that has been reflected at least once in the scene is evaluated from the photon map. Then the incident light coming directly from the light source is calculated and combined with the indirect light. Global Illumination - All the illumination in the scene is calculated by evaluating the photon map only. You can read about the pros and cons in the chapter - Photon Mapping - Introduction and Examples. Static Photon Map in Animation - The photon map will be calculated only once at the beginning of an animation. You can use this option to animate a fly-through of an architectural scene where the objects themselves do not move. It is also possible to exclude individual objects from the photon mapping process - those will be illuminated only with direct lighting - and then you can even render animations with a static photon map and moving objects, e.g. a car driving through a street lined with houses. Photon Pool - For a good estimate of the incoming irradiance at a point on a surface, a specific number of photons scattered around that point have to be gathered in the so-called photon pool. The maximum number of photons to search for the photon pool determines the quality, the sharpness and of course the rendering time. You should take into account that searching through a photon map that might hold millions of photon entries is no trivial task.
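How such a photon pool is turned into a light intensity can be illustrated with the classic density estimate from the photon-mapping literature: gather the nearest photons around the surface point and divide their summed power by the area they were collected from. The following toy sketch assumes this standard approach; CyberMotion's exact formula and its internal search structure are not documented, so treat the details as assumptions:

import math
from collections import namedtuple

Photon = namedtuple("Photon", "position power")      # power = (r, g, b) flux carried by the photon

def irradiance_estimate(photon_map, point, pool_size):
    # Gather the 'pool_size' photons nearest to the surface point. A real renderer
    # would use a spatial search structure instead of sorting millions of entries
    # like this toy version does.
    pool = sorted(photon_map, key=lambda p: math.dist(p.position, point))[:pool_size]
    if not pool:
        return (0.0, 0.0, 0.0)
    radius = math.dist(pool[-1].position, point)     # radius of the sphere enclosing the pool
    area = math.pi * radius * radius
    return tuple(sum(p.power[i] for p in pool) / area for i in range(3))

# Toy usage: three photons stored near a point on a floor, pool of the 2 nearest.
photons = [Photon((0.1, 0.0, 0.0), (0.2, 0.2, 0.2)),
           Photon((0.0, 0.0, 0.2), (0.1, 0.1, 0.1)),
           Photon((5.0, 0.0, 0.0), (0.3, 0.3, 0.3))]
print(irradiance_estimate(photons, (0.0, 0.0, 0.0), pool_size=2))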
In practice, appropriate values for the photon pool size lie between 250 and 800 photons when photon mapping is applied only for the indirect lighting, and between 500 and 5000 photons when Global Illumination mode is activated with photon maps containing several million photons. As a rule, the more photons in the photon map, the more photons should be gathered in the photon pool; otherwise the pictures take on a somewhat spotty appearance. For examples refer to the chapter - Photon Mapping - Introduction and Examples. Caustics - Light Reflections - Includes the calculation of caustic light reflections in the photon mapping process. Caustic photons are saved in a separate photon map, the so-called caustic map. This is due to the differing demands on global photons (soft indirect light) and caustic photons (sharper contours created by specular reflections and transmission). Caustics Pool - For the caustics pool you do not need as many photons as for the global photon pool. Caustic reflections often have sharp contours (e.g. light transmitted through a lens and focused in a sharp focal point), and with too many photons the caustic reflections would become too blurry. Good values lie between 80 and 250 photons, depending on the number of photons contained in the caustic map. For more examples see also: Photon Mapping - Introduction and Examples Light dialog - Photon Emission Parameters .topic 400 The shading options also have an effect on the rendering quality: Constant - Using a constant tone, for each facet only the color-intensity at its center is calculated and this information is used to fill the rest of the facet with the same color. This is the simplest and by far the fastest type of calculation and depiction. (Only for scanline depiction, e.g. for very fast animation previews.) Final - With this tone-type an individual intensity is calculated for each point on the object's surface. When using the raytracing or photon mapping algorithm the final tone is automatically applied. Texture - If this button is selected, the object's material textures (procedural or bitmap textures) that have been defined in the materials dialog are included in rendering. For fast previews you can therefore switch the textures on or off globally, independent of whether the texture for the object is switched on in the material dialog. Interpolate Facets - If you switch on facet interpolation, then for all objects owning the material attribute "Interpolation" the shading of the surface is smoothed. Highlights - All objects that are indicated as having a shiny and reflective surface in the material dialog are, on intensity calculation, rendered with highlight reflections from light sources. .topic 390 In this box you determine the resolution at which the picture is calculated, independent of the display window. This permits calculation at any resolution from 32 x 32 to 6000 x 6000 pixels. Please remember that an extremely high resolution is not necessarily very meaningful. An extremely complex picture that requires a rendering time of, say, 10 minutes at a resolution of 640 x 480 pixels will, at a resolution of 6000 x 6000 pixels - corresponding to 117 times the number of pixels - require almost 20 hours of rendering time. The same is also true for the required memory. A picture of 640 x 480 pixels with all effects switched on needs about 2.5 MB during rendering for the picture and effect buffers (a maximum of 9 bytes per pixel if all effects like light reflections and depth of focus are switched on).
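The memory figures quoted above follow directly from the stated maximum of 9 bytes per pixel, as this quick check shows (plain arithmetic in Python):

def buffer_megabytes(width, height, bytes_per_pixel=9):
    # Size of the picture and effect buffers in MB for the given resolution.
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(buffer_megabytes(640, 480), 1))     # ~2.6 MB for a 640 x 480 picture
print(round(buffer_megabytes(6000, 6000)))      # ~309 MB at 6000 x 6000 pixels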
A resolution of 6000 x 6000 pixels would therefore require about 300 MB of memory. You can choose from some preset standard resolutions in the list box, or the resolution can be changed at any time through the keyboard to meet specific requirements. .topic 670 Regular TV uses interlaced video. An interlaced video picture contains two fields of picture information shot at different times. In the first shot the picture information is saved in all odd-numbered scanlines (1, 3, 5...), and in the second shot all even-numbered scanlines (2, 4, 6...) are saved in the same video frame. When playing this video on TV, both fields are played in succession to produce the interlaced TV picture. So, when watching television you always see only one half of the scanlines of a picture - it is the playback frequency and the luminous characteristics of a television screen that give the impression of full-frame pictures. If you plan to play your CyberMotion animations on TV you can now switch on Field Rendering for AVI output, too. Since twice as many pictures (each of half resolution) are rendered, Field Rendering gives smoother motion and can even reduce or eliminate the need to render motion blur - which can save rendering time. But this applies only to TV output; Field Rendering is not suited for output on computer monitors! Example: The illustration shows a meteor crossing the picture from left to right. In the first picture only the odd-numbered scanlines are rendered. Then - a moment later in time - a second picture is rendered containing only the even scanlines. Now both pictures are interlaced and saved as one single picture (illustration on the right). When playing the film on a TV, the picture information is separated again by the hardware and both fields are played in succession again, to provide smooth movement on the television screen. Sometimes video cards demand the reverse order of the field data. Read your video card documentation and switch from "Lower Field First" rendering to "Upper Field First" if your card displays the even scanlines before the odd scanlines. Field Rendering and Scene Previews With the "Render Scene Preview" function you can render fast preview animations that use the same depiction mode as set for the camera viewport. These preview animations can be saved as AVI video files in the same way as final animation films. You can also activate field rendering output for these preview films if you select the "Field Rendering for Scene Previews" button. Compressing Videos CyberMotion films are always saved as 24-bit AVI (high quality, uncompressed) video files. These uncompressed (and therefore lossless) video files can become very large, but they can easily be converted to compressed video files using third-party (free- or shareware) converter programs. You should always try a variety of different compression algorithms/encoders before deleting the original uncompressed file, because output quality and compression rates differ widely. Compressing a video containing interlaced field data is even more complex, since the compression algorithm has to take into account that each picture is composed of two temporally shifted fields. .topic 410 Global switches for additional rendering effects are found on the right side of the render options dialog. Lens effects If you take a picture with a real camera it can be flawed by the camera optics. If, for example, a light source shines directly into the camera, several reflections can be produced within the lens.
This is responsible for the well-known star, circle, or annular light reflections in the pictures. The photo and film-industry, however, do not always view this effect with pleasure and go to considerable length to avoid it. On other situations, however, these effects are deliberately aimed for and highlighted. In the meantime, every movie fan has become so accustomed to these picture faults that, by using these effects, you can considerably increase the degree of reality in computer-generated pictures. You can determine if and how a light source produces lens reflections with a great number of different parameters for each individual light source in the Light dialog. Here, in the picture-parameter dialog, the calculation of these light effects can only be switched on or off globally with the button. Particle-Systems If you have defined particle actions in the particle-editor, you can specify with the button, if the particles are calculated for (Preview-) animations or not. Bones Usually the bones of a skeleton are only visible in the viewport windows - in the final rendering skeletons are hidden automatically. However, if you want to include bones also in the final rendering of a film, then just switch on the button in the Render Options dialog. Starfield Switching on this option includes a starry sky in the picture calculations. This starfield can be animated or even combined with other background models (e.g. a starry sky filtered through a cloud cover). The adjustments for the starfield are defined in the starfield-editor. Object Halo With the button you can switch off or on the halo effect for all objects simultaneously. This effect creates a halo of light enveloping the outline of an object and can be used to simulate atmospheric halos around planets or, for instance, to create swirling swarms of glowing particles. The parameters for object halo are edited in the material editor. Volumetric Spotlight Switch on the option to render a realistic light-cone (not available in scanline-mode). Each spotlight that has been switched on then throws a cone of light in which spotlight-parameters (such as edge-interpolation and distance-dependent intensity sensitivity) are taken into consideration. Shadows from objects penetrating the cones, and filtering of the spot cone light through transparent objects are also possible, although very calculation-intensive. In order to render these silhouettes in the visible light-cone, they are referred to in a volume rendering process that senses the inside of the light-cone and - depending on the accuracy required - requires a large amount of calculation time. There are two parameters with which you can determine the intensity and the accuracy of the volume-calculation of the cone: Diffuse Reflection: A visible light-cone originates if there are particles that reflect the light of the spotlamp - such as, for example, dust or smoke in the area. With the parameter you determine the reflectivity of these imaginary particles, and thus the brightness, with which the light-cone is calculated. Resolution: This value defines the accuracy of the light-cone scan. If no objects are within the light-cone (so no complicated shadows need to be calculated) a value of 0.10- 0.20 is entirely sufficient. However, the value should be as high as possible with shadow-calculations. The value also depends somewhat on the spread-angle of the spotlight. A wide spread signifies a wide spot cone so the scan rate must also be higher if you want to obtain sensible results. 
The narrower the spot cone, the lower the resolution parameter can be chosen. In order to save calculation time, the button can be switched on in the light dialog for all spots that do not throw a shadow. Example: The illustration shows a simple house wall with 2 windows, behind which is a spotlight source with visible light-cones. A colored bitmap with transparency has been projected onto the transparent panes, so that the four windowpanes filter the light falling through them by different amounts. Depth of Focus Depth of focus is another effect that is used in film and photography to accentuate areas of a picture. With depth of focus switched on, only picture regions at a specific distance are sharp, and picture regions that lie nearer or further away become increasingly blurred. Distance of Focus: With this parameter you can specify the precise distance at which the picture is sharpest. If, for example, a value of 1000 units is entered here, an object that is at this precise distance from the camera is represented with complete sharpness. Objects that are in front of or behind this object are represented less sharply with increasing distance from this point. Range of Focus: With this parameter you can decide how large the area of focus will be, i.e. how rapidly the sharpness falls off with increasing distance from the focus point. With a very small value (minimum 1) only a very small area around the focus is sharply represented and the sharpness falls off rapidly. With a greater value you can increase the area of sharpness and reduce the rate at which the sharpness falls off. Focus on Reference Object - It is not necessary to work out the relationship between the camera and an object in order to determine the precise focal distance. Simply mark the object you want to focus on as a reference object in the viewport window and then, back in the Render Options dialog, operate the button. The distance of the camera from the reference object is automatically calculated and noted. Animated Depth of Focus - If you switch on the button, the program automatically calculates the focal distance to a reference object that you can choose in an object selection box, which appears when you select the button next to it. In each frame of an animation this reference object will be used to calculate the sharpness distance. A Group Object is best suited for this purpose, since Group Objects are only drawn in the viewport windows - in the final rendering they are always hidden. Example: In the left picture the sphere lying at the back was chosen as the focus, while in the right picture the sphere at the front was the focus. Sparkle Sparkles are star-like lens reflections that appear if light reflects from a shiny object into the lens of the camera. For this, the object must be shiny, and at the same time sparkle must be globally switched on in the rendering dialog. If you switch on the sparkle option, you can render sparkle dancing on waves as shown in the picture above. The following parameters can be entered: Size: The basic size of the sparkles can be entered - the size at which they are rendered, however, is dependent on the distance and also on the intensity of the shine. Threshold: Here you can set a threshold value between 0.1 and 1 that corresponds to the minimum intensity of the light that must be reflected from an object before sparkle is calculated. .topic 72 There are four entries in the menu bar to start the calculation of an image or an animation.
For fast access four buttons corresponding to these functions can be found in the button strip directly above the viewports. You can start picture calculations at any time in each work mode simply by pressing one of these buttons or by operating the corresponding shortcuts, depicted in the button images. Render Scene (Short cut 'S') The button causes a re-draw of the scene in the render window and does not differ from the output in the camera viewport, except that now the size of the rendered picture is adapted to that entered in the Render Options dialog. The picture size can be independent of the maximum screen-resolution and independent of your current viewport size. However, if you render a picture in which the length/ height relationship differs from the length/ height relationship of the viewport, details of the picture rendered for the file are automatically somewhat different from than those shown in the viewport. Render Final (short cut 'F') When you operate the button your picture is calculated in scanline mode or with the high quality raytracing-algorithm with regard to the settings planned in the material-, light- and render options dialog. Depending on the hardware available, the complexity of the scene and the depiction mode used, the calculation of the picture can take from a few seconds to several hours. Render Scene Animation (short cut + 'S') Corresponding to , but this time a whole animation is calculated. An entire animation with many individual pictures and a complex scene can conceivably require several hours to days to render, depending on the speed of your computer and the rendering parameters (resolution, antialiasing, reflection etc). You can try out the settings for the materials and the background lighting simply by rendering a control-picture. What a waste of time it is if, after a day of animation rendering, you find that the chosen camera or object movement is not what you anticipated. A simple preview rendering would have been helpful here. Render Final Animation (short cut + 'F') The final rendering process in scanline or raytracing mode for a whole animation is started. The Render Window The rendering is made in an external render window with its own menu bar. Pictures can be saved after rendering with help of the menu function "File - Save Picture as..." (as .bmp, .jpg, .png, .pcx or .tga files) - animations can be saved via the corresponding menu entries as AVI-videos or picture sequences. After completing the rendering of an animation the calculated video will automatically start to play in the render window. You can even pick out individual pictures of the film by stopping the video at the required points and then saving the window content again via "Save Picture as....". A new picture calculation overwrites the last rendered picture but not the last rendered animation. The menu entry "File - Show animation" will get back the animation in the render window. Only a new animation calculation will overwrite the old one. The render window in action - from the left to the right you can see in the status bar: render time | approximated remaining time | frame number | and the progress bar for the current rendered picture. Interrupting Picture Rendering You can interrupt the rendering process by pressing the "ESC" button, via the menu function "File - Stop Rendering" or just by closing the render window. 
Show Last Rendered Picture/Animation
When the render window has been closed, minimized or simply lies below the main window, you can restore the render window and its content simply by selecting in the button strip or choosing the "File - Show last rendered picture/animation" entry in the menu bar (of course, minimized windows can also be restored by selecting the corresponding button in the Windows task bar). To bring the main window to the top again, just press the "ESC"-key.

.topic 49 Choose the "Autostereogram" entry in the menu bar and a dialog appears which enables you to create real Autostereograms. All pictures generated with CyberMotion are, in fact, created on a three-dimensional basis but end up as merely two-dimensional illustrations. To really see a three-dimensional picture, two separate pictures must exist - one for the left eye and one for the right. Conventional methods primarily use two separate pictures in stereoscopic virtual-reality helmets with two built-in screens, or calculate red-green pictures, where filter glasses for the left and right eye individually filter out the red and green portions, so that two separate pictures are formed again and the 3D impression is obtained. However, there is also a method with which you can obtain a real three-dimensional impression from a two-dimensional picture presentation - so-called Autostereograms. In an Autostereogram the pixels required for the "stereo views" for both eyes are combined within one picture.

The Autostereogram
Rendering of an Autostereogram proceeds as follows: Instead of only rendering a "picture-point", the depth-information of the scene is also used, and two horizontally separated picture-points are rendered for every screen depth-value. These two points are given the same color-value. The three-dimensional effect for the point arises provided that the left eye observes the left picture-point and the right eye the right one. There are two ways to see the points truly separately with both eyes: By deliberately unfocusing the eyes (looking through the picture into the distance) the scene resolves and both eyes see specific points. The same can also be obtained by squinting; however, the depth information is inverted by this method. Neither viewing method works without practice. Some tips that should enable access to the "concealed" picture-information follow at the end of the chapter.

The Peculiarities of an Autostereogram
The raytracing algorithm is used to determine the depth values of the picture. This time, however, it determines only the depth-value of each screen pixel of the scene. The depth value is computed first for both eye-points. To get the three-dimensional impression these must both be set to the same color. Here the eye-points for the left and right eyes lie on the left or right respectively of the relevant screen depth-point and, on rendering, the pixel colors do not refer to material-colors or to any effects like reflections, shadows etc. They serve no purpose, because the resultant picture is split up on rendering anyway. Instead, you can choose whether a random color or a texture is used here. Despite the raytracing, rendering is very much faster here because all these effects are relinquished.

Resolution
The "depth-resolution" of the picture is very dependent on the screen or printer resolution in dpi. dpi = dots per inch (points per inch), 1 inch = 2.54 cm.
This resolution indicates how many points per 2.54 cm your printer can show and is used in the picture rendering. It is important to know the printer resolution, because the pixel spacing is needed to calculate the two rendered eye-points. Fixing the eye and projection-plane spacing enables the pixel spacing for the left and right eye pixels to be determined exactly. It follows from this that you must have previously specified the resolution at which you will render the picture. For example, if the picture is rendered to be viewed on the screen at 70 dpi, you could not print it out later at 300 dpi, because then the eye-spacing set for the points would no longer be correct. On the other hand, you would not be able to recognize the depth information of a picture on the screen if it had been calculated for a denser resolution of 300 dpi - because the screen resolution simply could not represent it.

Autostereograms are expensive on storage space. As already mentioned above, the depth-resolution is dependent on the print resolution of the printer. Monitor resolutions of about 70 dpi have only a very low depth-resolution and it is noticeable that the scene appears as a sort of layered model - a sphere, for example, appears to be formed from a number of thick disks. If higher resolutions are used (for example 300 dpi) the depth resolution is good. However, there are again the consequences normally associated with high resolutions. A resolution of around 3500 x 2500 picture-points is required to render a DIN A4 picture at 300 dpi, for example. Autostereograms are rendered as True Color pictures, which means a storage requirement of around 25 MB (uncompressed TIF) at this resolution. When composing the scene, you should also consider that only a limited depth-resolution can be represented. If, for example, you set up objects arranged over a great range of depths, the spacing of the depth-steps can be so large that details of objects can no longer be recognized. The objects to be represented, therefore, should be quite compact and not far from each other.

The Dialog
Resolution
You can set the resolution, in dpi, at which the picture is to be rendered for its intended output using the "Resolution" parameter. The value for a normal 14" screen at 640 x 400 picture-points should amount to approximately 65-70 dpi. You can determine the precise value as follows: divide the visible screen width in cm by 2.54 to get the width in inches, then divide the horizontal screen resolution in pixels (only the horizontal resolution is important) by this value.
Example: 640 pixels horizontal resolution / (24 cm screen width / 2.54) = ca. 68 dpi

Depth
The object-depth of the scene is automatically adapted and scaled to the depth-resolution. You can, however, vary the depth impression more precisely with the Depth parameter. The greater the value, the deeper the picture appears to be. Values between 0.30 and 0.40 seem to produce the best pictures. At greater values it is no longer easy to get a completely sharp picture. Instead, you have to attempt to dive down into the depths with the individual objects and continually have to adjust your eyes for each depth.

Random Colors and Textures
With the "Patterns" selection box you can decide to use random colors or a texture for coloring the individual picture pixels.
Random Colors: If you use the random-color algorithm to color the individual picture pixels, an incoherent picture is rendered with no recognizable pattern.
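To make the pixel-pair principle a little more concrete, here is a heavily simplified sketch in Python (an illustration only - the function name, the eye spacing of 6.5 cm and the separation formula are assumptions made for this example, not CyberMotion's actual implementation). It colors a single pixel row: each pixel either repeats the color of its partner pixel at the depth-dependent separation or, if no partner exists yet, receives a random color. The depth factor plays a role loosely comparable to the "Depth" parameter described above.

    import random

    def stereogram_row(depth_row, dpi, eye_cm=6.5, depth_factor=0.35):
        """Color one pixel row of an autostereogram from its depth values (0 = far, 1 = near)."""
        eye_px = int(eye_cm / 2.54 * dpi)      # eye spacing in pixels at the chosen resolution
        colors = [None] * len(depth_row)
        for x, z in enumerate(depth_row):
            # points "completely at the back" (z = 0) get the constant spacing E/2,
            # nearer points a correspondingly smaller one
            sep = int(eye_px * (1.0 - depth_factor * z) / 2.0)
            if x >= sep:
                colors[x] = colors[x - sep]    # pixel pair: repeat the partner's color
            else:
                colors[x] = (random.random(), random.random(), random.random())
        return colors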
At first glance the picture appears to be entirely blank, as neither texture nor any other structures give a hint that it hides a complete three-dimensional picture. The absence of recognizable structures has, however, one further advantage - the eyes are not distracted by the structure of a texture, which makes it much easier for "beginners" to unfocus their eyes when viewing an Autostereogram. When starting out, therefore, always generate your Autostereograms using random colors. Instead of random colors you can also produce an Autostereogram using random gray-scale or monochrome values. For those who do not display their pictures on screen and also do not own a high-quality color printer, the monochrome mode is certainly relevant, because the depth-information of the picture still remains clearly recognizable in monochrome mode with only black and white points. The picture file, however, is always saved as 24 bit TRUE COLOR data.

Textures: Textures can also be used for the color-determination of the pixel-pairs. Simply choose the entry in the "Patterns" popup dialog.
Peculiarity of textures: Depth perception is based on pixel-pairs of the same color. If, however, two pixel-pairs of the same color value come to lie side by side, the picture loses its information value. From this it follows that the texture should be very detailed and variable in the horizontal direction. This is required only in the horizontal direction, however (in the vertical direction there are no such limitations). Taking it to the extreme, you could even use a texture composed of nothing but colored vertical lines.
The use of textures brings with it a further peculiarity: for pixel-pairs at the depth level "completely at the back" a constant pixel spacing (E/2) results. When the texture is scaled to this value the completed picture appears to be regularly tiled (at least in the areas that lie entirely at the back). CyberMotion, however, again offers you the possibility of trying out any combination:

Tiles: The textures for scaled tiles are based on the above-mentioned spacing. You can, however, also eliminate the tiling and use entire pictures as textures. A breakup of the picture then occurs through the effect of the pixel-pairs and their calculated eye-spacing intervals; nevertheless it can result in interesting effects. A project you can try out sometime is to first render a completely normal raytraced picture of your scene (i.e. with its full wealth of detail) and then use this picture in a second pass as the texture from which to render an Autostereogram. If you have switched on tiles, you can additionally choose between the following scaling types in the "tile-width" selection box:
1:1: The tile-texture is used in its original state.
X = E/2: The tile-texture is scaled horizontally in the manner described above, so that it results in a regular tiling of the whole picture in the horizontal direction.
X, Y = E/2: The standard: the texture is scaled horizontally and vertically in the manner described above.

Bitmapfile
In order to use a bitmapfile as a texture pattern, proceed in exactly the same way as when selecting bitmapfiles for object textures. Here again, the bitmap files used must be available in one of the 4 pre-defined paths. You need only choose the "Bitmapfile" field and select the file from the file selection box that appears.

Start Rendering
Picture or animation rendering is started directly from the dialog.
Once you have set things up you can operate the "Render" or button and insert the picture or video file format and path in the file selection box that appears. Everything else then works exactly as when rendering a normal raytraced picture. In the Render Options dialog only the settings for the picture resolution in pixels and the "Control Picture" button are relevant - all other settings are ignored when rendering the Autostereogram.

Hints on Looking at Autostereograms
Example: A little bicycle is hidden here.
Some eye-gymnastics can be necessary to see Autostereograms. Try the following exercise: Hold your forefinger at reading distance and fix your eyes on it. The forefinger represents the Autostereogram. It appears sharp because your eyes are focused on it. However, you cannot see the forms in the Autostereogram. Now look past your finger at a far distant object. If you focus on this object, your finger becomes blurred and is seen double. Depending on the distance of the object you are looking at, your two lines of sight are spread further or less far apart. The same is true for the blurred views of your finger. It is precisely through this eye movement that you can focus your eyes on the pixel-pairs of an Autostereogram and recognize the latent depth-information. The speed with which you learn this process depends entirely on the individual. Many find the hidden picture at once, others need several minutes the first time until they have the knack, and a very small percentage of people cannot develop it at all. In any case you lose everything if you try too hard, because if you try to look for the objects and force the picture to appear, you inevitably will not relax the eye muscles (which is required for separating the eyes). An interesting effect is the transition, as the picture appears slowly and blurred and then suddenly becomes sharp. The outlines of the picture are recognized first, then the eye is automatically drawn into the picture - just as when it views a normal picture.

Tips
At first it is easier to view pictures with incoherent random colors, as the eyes are not distracted by textures, which you normally focus on instinctively and automatically. First hold the picture right at the tip of your nose and look "through" the picture. Then try to maintain this eye spacing while you slowly and steadily move the picture further away. Sometimes light reflections appear on your picture or screen; try to focus on them - this will also enable you to obtain the effect. Many people find it easier to squint than to relax their eyes to achieve the spacing. The depth-information can also be found by squinting, however the depth-information is reversed by it (what should appear raised looks hollow). Furthermore, consciously squinting for a long time is quite tiring and can rapidly lead to headaches.

.topic 600 All about global illumination using photon mapping
Photon Mapping - Introduction and examples
Global illumination - the simulation of all reflections of light in a model
Raytracing
Photon Mapping and Raytracing
The Photon Map - a Data Structure Representing a 3D-Light Map
The Photon Pool - Evaluation of the Photon Map
Photon Mapping for Indirect or Global Illumination
Photon Mapping and Landscapes
Caustics Light Reflections
Excluding Objects or a Light Source from the Photon Mapping Process
Speed Up Rendering Using Static Photon Maps in Animations
Overview - where do I adjust which parameters?
Render Options
Choose the rendering algorithm - photon mapping only for indirect illumination or as a global illumination model
Static photon map in animation
Number of photons to gather for the global photon pool and the caustics pool
Global photon pool and caustics pool - number of photons to gather for averaging the illumination in a surface point
Light Dialog - Photon Emission Parameters
Exclude light source from photon mapping - use only direct light instead
Number of photons a light source emits
Photon intensity correction
Area lights and emission direction of facets
Material Dialog - Object Properties
Caustics - aim additional photons at objects that cast caustic reflections
Exclude object from photon mapping - illuminate directly instead

.topic 520 The physically based simulation of all light distribution in a virtual 3D-model is called global illumination. A global illumination algorithm should take into account all interactions of light with the different surface materials in a scene. That sounds alright in theory, but the nature of light - having both the properties of an electromagnetic wave and of a particle - is much too complex to include all phenomena in a correct physical simulation. Anyway, what counts is only the individual perception of light, since we can only "see" a limited spectrum of light and the interpretation of what we see is mainly a physiological sensation. The global illumination algorithm implemented in CyberMotion is based on a combination of photon mapping with conventional raytracing. So, let's have a look at the raytracing algorithm first.

Raytracing
In nature, a light-ray leaves a light-source, reflects off different objects and then falls, some time later, into the eye. Raytracing is exactly the reverse process. A "viewing ray" is sent out from the camera viewpoint, through a projection plane (the screen), and then tested for an intersection with an object in the scene. If an intersection occurs, the relevant pixel of the screen - at the point where the viewing ray passes through the projection plane - can be drawn in the calculated surface intensity of the object. The viewing ray can also be followed further, however. At a mirroring object the angle of reflection is simply calculated and a new ray is sent out. The surface intensity for the first object met then comes recursively from the incident light plus the light reflected from the other objects that are met. With this algorithm it is also possible to calculate the split in the viewing rays that is necessary with transparency. In order to render transparency realistically, the surface intensity is calculated from the reflections plus the intensity of the light coming through from behind. A mirror ray and, simultaneously, a split viewing ray are calculated. If these viewing rays meet another transparent or mirroring object, the whole process is started again, so the color for rendering even a single screen point results from a whole collection of viewing rays.

Direct and Indirect Illumination
Raytracing is a standard for high picture quality and for realistic reflections and refraction. One of the major drawbacks of a general raytracing implementation is that it does not take into account the indirect illumination - the light that is reflected from other objects in the scene, as opposed to the direct light from a light source. Direct light comes in directly from the light source; indirect light has been reflected at least once from a surface in the scene.
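As a rough illustration of these recursive viewing rays, here is a minimal Python sketch (the scene interface - scene.intersect, scene.lights and the material fields - is an assumption made for this example, not CyberMotion code; the split ray for transparency would be handled in the same recursive way as the mirror ray):

    import numpy as np

    def trace(origin, direction, scene, depth=0, max_depth=4):
        """Follow one viewing ray through the scene and return a color."""
        hit = scene.intersect(origin, direction)   # nearest hit (point, normal, material) or None
        if hit is None or depth > max_depth:
            return scene.background_color
        color = direct_light(hit, scene)           # light arriving straight from the light sources
        if hit.material.reflection > 0.0:          # mirror ray: the recursion continues here
            mirror = direction - 2.0 * np.dot(direction, hit.normal) * hit.normal
            color = color + hit.material.reflection * trace(hit.point, mirror, scene, depth + 1, max_depth)
        return color

    def direct_light(hit, scene):
        """Brightness falls off with the angle between the incoming light ray and the surface normal."""
        total = np.zeros(3)
        for light in scene.lights:
            to_light = light.position - hit.point
            to_light = to_light / np.linalg.norm(to_light)
            total += hit.material.diffuse_color * light.intensity * max(0.0, np.dot(hit.normal, to_light))
        return total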
Direct light and surface brightness - In a conventional illumination model the light intensity coming directly from the light source is determined by calculating the light incidence angle between the light ray and the surface normal. This is of course only a geometrical approach, but it has produced good results for years in the computer graphics industry.

Raytracing and Indirect Illumination
Usually in 3D programs a constant light intensity can be specified to simulate the indirect illumination in models without global illumination. In CyberMotion this constant general area brightness is defined with the light object "Ambient". Another possibility to simulate indirect light is to put additional lights with a lower intensity level directly opposite the main light's emission direction. This will help to lighten up deep shadows cast by the main light source. All in all, both methods are very poor approximations. Especially in architectural scenes the illumination in a room is dominated by indirect light reflected many times from the diffuse surfaces in a building.

Global Illumination - Photon Mapping and Raytracing
Now, with the newly implemented photon mapping algorithm, CyberMotion provides a global illumination model that combines the pros of raytracing - reflection and refraction - with the ability to also render the indirect illumination caused by diffuse reflections in the scene. Rendering a picture with photon mapping is a two-pass procedure. In a preliminary run little packages of energy (photons) are emitted from the light objects in the scene. Similar to ordinary raytracing, the paths of these photons are traced through the scene and, just like in raytracing, photons are reflected from specular surfaces and transmitted through transparent objects. Photon scattering is a more realistic simulation of light because it comes closer to the natural distribution of light from its source.

The Photon Map
Each time a photon hits a diffuse surface, the position and properties of the photon are stored in a 3-dimensional data structure called the photon map. Simultaneously a diffuse reflection is calculated and the diffusely reflected photon continues its way through the scene until it is absorbed in the scene or lost in space. Depending on the render options, scene data and light settings a photon map can rapidly grow to several million stored photons. Therefore, be sure to have enough RAM available (128mb minimum) before entering astronomical numbers of photons for the light emission parameters.

Evaluation of the Photon Map using the Photon Pool
After the photon map has been built during the first pass of the rendering process, an ordinary raytracing pass is started and the light data stored in the photon map can be evaluated to estimate the light incidence for a point on a surface. For a good estimation of the incoming irradiance a specific number of photons scattered around that point has to be gathered in the so-called photon pool. The maximum number of photons to search for the photon pool determines the quality, the sharpness and, of course, the rendering time. You should take into account that searching through a photon map that might hold millions of photon entries is not really a trivial thing. The size of the photon pool can be entered in the render options dialog.
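The principle of the photon pool can be sketched as follows (Python; photon_map.nearest() stands in for the k-nearest-neighbour search in the 3D photon map and is an assumed interface, not CyberMotion's internal code). The nearest photons around the shaded point are gathered and their energy is averaged over the disc they cover:

    import math

    def estimate_irradiance(photon_map, point, pool_size):
        """Gather the nearest photons around 'point' (the photon pool) and average their energy."""
        pool = photon_map.nearest(point, k=pool_size)              # assumed k-nearest-neighbour search
        if not pool:
            return (0.0, 0.0, 0.0)
        radius = max(math.dist(p.position, point) for p in pool)   # radius the gathered pool spreads over
        area = math.pi * radius * radius                           # disc covered by the photon pool
        return tuple(sum(p.power[c] for p in pool) / area for c in range(3))

The larger the pool, the smoother (but also the blurrier and slower) the estimate becomes - which is exactly the trade-off behind the recommended pool sizes below.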
Appropriate values for the photon pool size lie between 250 and 800 photons when photon mapping is applied only for the indirect lighting, and between 500 and 5000 photons when Global Illumination mode is activated with photon maps containing several million photons. As a rule you can say: the more photons in the photon map, the more photons should be gathered in the photon pool, otherwise the pictures take on a somewhat spotty appearance.

Photon Mapping only for Indirect Illumination or as a Global Illumination Model
In CyberMotion you can choose whether you want to apply photon mapping only for determining the indirect illumination in a scene or as a global illumination model for all of the light distribution in the scene. You can change between these two modes in the render options dialog.

Photon Mapping only for Indirect Illumination
With this option the photon map is only evaluated to estimate the indirect illumination caused by diffuse reflections in the scene. The main part of the illumination and shadow calculations will be rendered with the help of the directly incoming light, taking into account the light incidence angle between the light ray coming from the light source and the surface normal vector, as described at the beginning of this chapter.
Advantage: When applying a photon map only for indirect illumination you can manage with relatively small photon maps. To estimate the general area brightness in a small room, photon maps of 50,000 photons and upwards will do. To soften the sharp shadows calculated from the direct lighting you can use multiple shadow sensors or insert area light objects. In general, shadows calculated from direct lighting are more accurate than shadows generated from a pure photon map - when photon mapping is applied as a global illumination model. If not enough photons are emitted towards the scene, the contours of the shadows become too blurry and smaller shapes in the scene may not be hit at all.
Disadvantage: The combination of the different illumination models requires an adjustment of the light intensities - on the one hand the intensity evaluated from the photon map and on the other hand the irradiance calculated from the vector pointing directly towards the light source. In the light dialog you can adjust the photon intensities via the "Intensity Correction" parameter. Once an appropriate correction value has been found, you can simply adjust light intensities as usual and switch back and forth between normal raytracing and photon mapping without further adjustments to the correction parameters.

This example was rendered with photon mapping only for the indirect illumination. An area light object (NURBS patch with 49 shadow sensors) at the ceiling and 2 table lamps (36 shadow sensors each) standing in the shelves provide a warm lighting and soft shadows. For the photon mapping process the ceiling lamp emits 100,000 photons and the two table lamps 50,000 each. The effect is a soft area brightness illuminating the otherwise dark areas beneath the table and in the shelves. You can download this project file from our internet library if you like.

Another example (the project file for this example is part of the installation under "../projects/volumetricfire/candles_anim.cmo"): Two candles and a third light source behind the camera emit 50,000 photons each for the indirect illumination. Because of the reasonably small photon map a correspondingly small number of only 350 photons can be entered for the photon pool size.
To soften the shadows cast by the direct lighting, multiple shadow sensors were again applied for each light source. If you tried to render the same picture with photon mapping as a global illumination model, many more photons would have to be emitted by the candles, otherwise the relatively thin candle shadows would not be visible at all. This is totally independent of the complexity of the scene. To get clearly visible shadows from the thin candles, about a million photons would certainly be necessary for each light source. Along with that, you would have to enter a correspondingly high number of photons to gather for the photon pool to get smooth intensity transitions and to prevent a spotty appearance.

Example of an architectural outdoor scene with photon mapping only for indirect illumination (model: John Ridgway, Julia's House from the "Frontiers"-comic book series - you can download the project file from the internet library). This time a parallel sun light was applied (emitting 1 million photons, photon pool size of 1000 and 36 shadow sensors scanning the sun disc). When rendering outdoor scenes with parallel lights, you should note that all photons emitted by the parallel light source are directed towards a bounding box surrounding all objects in the scene. Of course, the plane object will be ignored when calculating the dimensions of the bounding box - we certainly do not want to scatter millions of photons around on an infinite plane. This is why the illumination of plane objects will always be carried out with direct lighting, no matter which rendering mode you select (see also: Material Dialog - Object Properties - Only Direct Light). To maintain the diffuse light interactions between the ground and the buildings you should create an additional base plate that is placed slightly above the plane and is adjusted to the dimensions of the scene. In the example depicted above this was realized with the grassy subsoil plate beneath the asphalt ground.

Photon Mapping as a Global Illumination Model
If you choose in the render options dialog, then all light interactions in the scene will be calculated by evaluating the photon map only.
Advantage - The whole illumination is calculated in one go; there is no need to combine and coordinate different lighting models. Also, the calculation of shadow sensors can be omitted, since all light scattering is provided by the photon map and shadows result automatically from the distribution of photons in the scene. Because intensities are averaged over many photons, shadows become very soft and the whole scene gets a very natural atmosphere.
Disadvantage - As described earlier in this chapter, the shadows often become too soft, especially when too few photons are emitted. To get an accurate illumination of minor details in the scene you have to emit a great number of photons, usually a million and more. Furthermore, you have to enter a correspondingly high number of photons to gather for the photon pool to get smooth intensity transitions and to prevent a spotty appearance (about 600 up to 2500 and more, depending on the size of the photon map). This requires a fast CPU and a memory capacity of at least 256mb. Before rendering pictures demanding so much computing power and time, you should always render preview pictures with a lower number of emitted photons and a smaller number of photons to gather in the photon pool.
The light intensities are distributed evenly among all emitted photons, so that the picture brightness is constant and independent of the number of photons emitted - in other words, each photon carries a share of the light's total intensity, so doubling the photon count halves the energy per photon. Therefore, you can render good test images using photon resolutions of 50,000 up to 100,000 photons for the emission and photon pools with 250 up to 500 photons for the evaluation of the photon map. Only when you have adjusted scene settings, render options and light intensities should the final rendering be started, with lights emitting millions of photons and a photon pool picking up about 600 to 5000 photons for each shading of a point on a surface.

Example: This picture of a great hall was rendered in global illumination mode. The only light source is a spotlight placed in front of a side entrance, thus simulating sunlight penetrating through an open door. The spotlight shoots about 2 million photons through the side entrance into the hall. For the evaluation of the photon map the photon pool size was limited to 700 photons. Initially all photons are directed along the side corridor, but you can clearly see that actually all parts of the hall are illuminated, even at the end of the main corridor where the camera is positioned. This is due to the manifold diffuse reflections the photons have undergone while tracing their way through the architecture. The picture was rendered on a P4, 2.54GHz with 256mb RAM at a resolution of 800 x 600 pixels with antialiasing switched on, in only 13 minutes. That's quite respectable, I think (compared with other implementations that render for whole days on a few boxes or spheres in a bare room).

This picture shows the same scene rendered without photon mapping - just pure raytracing and direct illumination. Because there is no diffuse interaction of light in the scene, only the side corridor is illuminated. Nevertheless, you can discern the columns in the main corridor, but that's only because the fog function has been switched on, so that the shapes get brighter with distance and fog density. Of course you could also switch on the constant ambient light, but columns, walls and ceiling are designed in one basic color, so increasing the light intensity with a constant term would not add to the visible detail. In such a case, if you really want to avoid using photon mapping, you would have to place some low-intensity lights at the corridor ends to simulate the light reflected from the walls.

Once again the hall rendered in global illumination mode, this time illuminated by 16 burning torches creating a dim atmosphere in the hall. Although many more light objects are used in this example, the rendering time is not much higher than in our first example of the daylight scene with only one spotlight in front of the side door. This is due to the fact that only the total number of emitted photons - and with it the size of the final photon map - counts for the rendering time. In the example depicted above only 60,000 photons were emitted for each torch light, adding up to only 960,000 photons. That's just half of the photon number the single spotlight emitted in the daylight example.

Photon Mapping and Landscapes
Photon mapping is ideal for rendering landscapes too. The diffusely reflected photons in the terrain create subtly graded tones even in areas that lie in the deepest shadows. The picture above was rendered in global illumination mode with one million photons emitted by the sun light.
Because no additional shadow calculations are involved (in contrast to direct lighting), the rendering time added up to only 30 minutes (compared to 11 minutes in raytracing mode with shadows). The picture was rendered on a P4, 2.54GHz, 256mb RAM, at a picture resolution of 800 x 500 pixels, with antialiasing and a terrain resolution of 700,000 facets. In contrast to architectural scenes, the photon pool can also be limited to small numbers of photons to gather (400 photons in the example). When too few photons are gathered for the irradiance estimation, plain surfaces appear spotty and disturbing, especially in architectural scenes, but in terrains with highly detailed landscape textures this effect is barely discernible.

Caustics
The examples discussed so far mainly present the advantages of photon mapping regarding the diffuse interactions of light - when light is reflected randomly from rough surfaces. But with photon mapping you can also trace the path of photons that are reflected from highly specular surfaces or transmitted through transparent materials. The visible light patterns arising from these reflections are so-called caustics. Example of caustic light reflections beneath a little glass figurine, caused by photons that were refracted when passing through the glass. The project file is part of the CyberMotion installation under "/projects/caustics/ant.cmo".

Caustic photons are stored in a separate photon map, the so-called Caustics Map:
Usually a photon map is evaluated only for the indirect illumination, in combination with direct light for the main illumination and shadow calculations. For this purpose it is sufficient to emit only a few tens of thousands of photons into the scene, so that we can average the general area brightness at each point in the scene. Caustic reflections, on the other hand, are often sharply outlined light patterns, like in our figurine picture shown above. It would be impossible to render these light reflections with only a few photons scattered around - you wouldn't even see a glimpse of the light focused beneath the figurine. On the other hand, it would also be absurd to emit millions of photons into the scene and have to evaluate huge photon maps afterwards, only to capture the light reflections of a little specular object somewhere in the scene. That's why we have to manage two different photon maps: the global photon map for the general illumination, and a separate caustics map only for those photons that have been reflected or transmitted via a specular surface before hitting a diffuse surface. The caustics map is built in a second photon tracing pass where additional caustic photons are aimed only at objects that are highly specular or transparent and own the material attribute .

The evaluation of the two different photon maps also requires separate photon pools. For the global photon pool many more photons have to be gathered for the averaging process, so that soft and clean light transitions can be calculated for the area brightness. For the caustics pool, however, we need comparatively fewer photons, because we want sharp and clearly visible light reflections. You can adjust the size of the caustics pool in the render options dialog, as with the global photon pool. The emission of additional caustic photons can be switched on or off for each light object separately in the light dialog. You should activate the caustic photon emission only for lights that stand nearby or are directed towards objects that own the object attribute .
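The routing of photons into the two maps can be pictured roughly like this (a Python sketch with assumed field names, not the actual implementation): a photon that was reflected or refracted by a specular surface before hitting a diffuse surface goes into the caustics map, all other diffuse hits go into the global map:

    def store_photon_hit(photon, surface, global_map, caustics_map):
        """Sort a photon that just hit a surface into the global or the caustics map."""
        if not surface.is_diffuse:
            # at specular or transparent surfaces the photon is not stored; it is reflected or
            # refracted and continues its path with came_via_specular set to True
            return
        if photon.came_via_specular:
            caustics_map.append(photon)    # passed through glass or bounced off a mirror: caustic photon
        else:
            global_map.append(photon)      # ordinary diffuse hit: general illumination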
If you want to render a picture with the camera focused on a caustics object as the main part of the image, it is advisable to use a spot lamp for the illumination, because then the stream of photons can be aimed directly towards the target object. The number of additional caustic photons emitted from a light source is entered via the light emission parameters in the light dialog. Instead of specifying a certain number of photons, this time you just have to enter a factor that describes how many more photons per area have to be emitted towards the caustic objects than for the global photons.

Take the glass figurine as an example again. The emission of global photons was set to 50,000 photons. For the emission of caustic photons the factor was set to the maximum value of 100 via the -Parameter. During the processing of the photon map in the first pass, when the 50,000 global photons are emitted, about 1,100 photons find their way through the glass figure and are saved in the caustics map. Then the additional emission of caustic photons is started with 100 times more photons, that is 100 * 50,000 = 5 million photons. But this is only a fictitious value that merely specifies how many more photons per area are emitted in general. Since caustic photons are only directed towards caustic objects, in the end "only" 200,000 photons find their way into the caustics map. This is more than enough for a sharp representation of the caustic light effects under the glass figure.

This picture was rendered under the same conditions in global illumination mode. Only 100,000 photons are emitted from a simple point light source, with a factor of 100 for the additional caustic photons. As a reward, wonderful light reflections show on the wall and the base. This demo is found in the projects folder of the CyberMotion installation under "/projects/caustics/perfum_photonmap.cmo".

More Features of Photon Mapping:

Deactivating Photon Mapping for Individual Light Sources
Each light source in CyberMotion can emit photons for the photon mapping process. However, you can also exclude individual light objects from this process. If you switch on the option in the light dialog, then, instead of emitting photons, the corresponding light object will illuminate the scene only with conventional direct light algorithms, no matter which rendering mode is activated. You can use this function for little lamps in instrument panels, for lights far away in the background or, e.g., for spot lamps illuminating only small parts of the scene. To save rendering time, just switch on this function for all light objects that do not contribute much to the general illumination in the scene.

Deactivating Photon Mapping for Individual Objects
You can exclude individual objects from the photon mapping process too. If you switch on the object attribute in the material dialog, then the object becomes invisible to photons and is instead illuminated directly by each light source. Possible uses:
- if rendering in global illumination mode (only photon mapping, no direct lighting) and a small object is not hit by enough photons to shade it accurately. Instead of emitting more photons you can exclude these small objects from photon mapping and they will be correctly shaded again with direct lighting.
- if rendering moving objects in an animation with a static photon map. If the photon map is calculated only once at the start of the animation, then objects excluded from photon mapping can still move through the scene.
For the plane object this attribute is switched on automatically. Since the plane extends up to the horizon, it would be nonsense to waste photons on this infinite plane.

Static Photon Map in Animation
If you activate this feature in the render options dialog, the photon map will be calculated only once at the beginning of an animation. You can use this option to animate a fly-through of an architectural scene where the objects themselves do not move. It is also possible to exclude individual objects from the photon mapping process - those will be illuminated only with direct lighting - and then you can even render animations with a static photon map and moving objects, e.g. a car driving through a street lined with houses.

.topic 200 Fine feathers make fine birds. Transferred to our 3D world we could say that without a good surface texture even the most complex 3D model is not much to look at. In the material dialog you will find a vast choice of possibilities to create your own textures or simply load pre-defined textures from the visual library.

The Material Dialog
Structure of the dialog, preview options, material management and material references
The pages in the dialog:
Material - Basic Material Attributes
Basic material attributes like color, reflection, transparency and so on.
Material - Object Properties
Individual object properties that cannot be referenced to other objects
Procedural Textures
Mathematically defined patterns and fractal structures provide an inexhaustible variety of possible combinations
Landscape Textures
Additional texture layers specifically for landscapes
Bitmaps
Put bitmap textures on objects or use bitmaps to control special material properties
Waves
How to create animated water textures
VRML-Export Parameters
Provide objects with a URL-address for VRML export

.topic 41 - Menu "Objects - Material/Color" - Short Cut: + "M". This dialog enables you to define the surface of each object using an extensive range of parameters. For instance, you can apply mathematically defined textures to generate surface structures similar to grained wood, marble, rock or multi-layered landscape textures on a fractal basis. These textures can be combined with bitmaps projected onto an object. You can apply several bitmaps at the same time, using bitmaps not only to texturize an object but also to control the reflectivity or transparency of the object's surface. With bumpmaps you can even simulate embossed structures on an object's surface. Or you can change a normal object into a luminous area light source with real light properties which you can adjust in the lights dialog. However, it is not necessary to specify all these possibilities at the outset - simply refer to the visual material library to select a required material. Use this material as a starting point and modify it to your own needs; every change will be immediately displayed in the large preview window at the upper left of the dialog. In the selector box below the preview window you can choose between several preview modes. You can select a primitive object for real-time previews (Sphere, Cube or Cylinder) or the "Object" mode, where the selected object is displayed centered in the preview window. Last but not least there is the "Camera, complete scene" mode that shows a preview of the whole scene in camera view. See also: Visual Libraries and Preview Options
Most material properties can also be animated (parameters that can be animated are indicated by a different, emphasized background color).
If, in Animation Mode, you change such a parameter in the dialog, a parameter keyframe will automatically be generated for the interpolation of the material data in an animation.

The Structure of the Dialog:
The dialog is divided into several areas:
- The preview window at the top left
- The object selection right beneath the preview window
- The button at the bottom left to adopt material settings from a specified object
- The center part of the dialog, containing five sets of material parameters that you can switch between with 5 tabs in the dialog's document header:
  - Material - The basic material settings and object attributes, which are not necessarily related to the appearance of the material
  - Texture - The parameters for the procedural textures
  - Terrain - Additional fractal texture layers for landscape textures
  - Bitmaps - The parameters for bitmaps, bumpmaps, reflectivity- and transparency maps
  - Waves - Animated water textures
  - VRML - Enter object URLs for VRML export
- The right-hand part of the dialog, containing the visual material library. Just select an existing material or extend the library by saving your own materials.

Object Selection
In the object selection box you determine the object for which you are going to edit the material parameters. Select an object with the mouse - the object's name will be highlighted and the parameters of this object are displayed in the parameter fields. Objects that do not have a material, such as the Camera, Background and light objects, are shown in gray and cannot be selected.

Adopting Material of Other Objects:
When, after a great deal of effort and patience, you have set up a new material specification for an object, it would be very useful if these attributes could be referenced to another object at the press of a button. There is the button beneath the object selection box for this purpose. Having completed the settings for an object, click on the selected object once again in the object selection box with the left mouse button. The name of the object will appear in the field beside the button. Now choose, in the selection box, the object that should assume the material parameters of the first object. Then simply operate the button in order for this object to accept the parameters of the other object. The material parameters are replaced by information giving the name of the object with the material specification. Underneath this info is the button. When you operate this, the dialog changes back to the functions of the material side so that you can edit the individual parameters of the object. When the material specification of an existing object is used, the same axes are also used for the texture on the other object. Thus, objects that overlap each other and are provided with the same texture are rendered with a continuation of the same pattern - so they appear to be formed from one piece of material. If you select the button, the object retains the current material attributes of the base object, but it uses its own texture axes in the depiction of the texture. This remains true even if the original object is switched off.

Adopting the Material for a Whole Branch in a Hierarchy
If you hold the -key pressed when clicking on the button, the material of the referenced object is adopted for all objects in the selected hierarchy branch.

.topic 42 Select the tab in the header of the Material dialog to bring the material side to the fore.

Object Color, Diffuse and Specular Reflection
The diffuse reflection is the basic color of an object.
Diffuse stands for a regular dull reflection of light, the object appears to be of a constant tone from all views. In contrast to diffuse reflection, specular reflection is the amount of mirrored light, i.e. highlights from light sources or mirrored objects in the scene. By specifying a particular color for specular reflection a filter function is applied for the mirrored light. With metal surfaces the specular color usually comes close to the diffuse color, resulting, for instance, in golden reflections on a golden surface. Other surfaces reflect the whole light spectrum, resulting, for instance, in white highlights on plastic or varnished surfaces. To edit a color simply click on the diffuse or specular color button. Select the "-->" button between the two color buttons to link up the diffuse color with the specular color. This saves time when editing metal materials with the diffuse color equal to the specular one. Reflection, Highlight and Roughness Reflection - Switching on this option includes surface reflections when rendering the object in raytracing mode, so that other objects and light-sources are mirrored in the object. If this option is switched off, only light reflections of light sources (highlights) are produced during rendering. The parameter value (0-1) determines how much light a mirror object reflects and is of crucial importance for the mirror attributes. A mirroring object having, for example, a reflection value of 0.3 only reflects 30% of the light striking it from other objects or light sources. Metallic objects, for example, generally have a very much higher reflection value of 90% or more. Reflections are generally rendered only in the raytracing procedure. Nevertheless, the settings for highlight and reflectivity are relevant to the other rendering modes, because mirroring of light sources, which generate highlights on an object, are built into each rendering process. Highlight - A value of 0-1 can be entered via this parameter box that determines the strength and radius of the highlight on a reflective object. A highlight is only rendered if the object also has a reflection coefficient, which is entered in the next parameter. Roughness - Adds an amount of unevenness to the object's surface. The higher the value, the rougher the surface is rendered. With a value of 0 the surface finish is completely smooth. Transparency and Refraction Transparency - With this option selected, the object appears to be transparent. The transparency value (0-1) determines the relationship of surface reflection to transparency in transparent objects - the amount by which the object is truly transparent. A value of 0.7, for example, identifies the following: 70% the surface intensity of the object is determined by the light-share that comes through the object from behind. The remaining 30% is decided by the color and reflectivity of the object. The closer this value approaches 1, the more transparent the object becomes. At a value near 0 the object is entirely opaque. In addition there is the portion of the intensity that originates from reflection (if reflection has been switched on for this object). Reflection in transparent objects is, however, more complicated to calculate than on opaque bodies. With transparent objects this depends jointly on the angle at which the light occurs, and also on the refraction coefficient of the object. These values are then modified with the reflection coefficient to produce the final reflection-value that is used. 
If, for example, you stand directly before a display window, you see comparatively little reflected in the pane - with glass and vertical light incidence only about 4%. On the other hand, if you look at a very much more acute angle, the reflective ability of the pane causes it to act almost as a perfect mirror. With transparent objects the tone of the object has a double meaning: on one hand the red, green and blue portions determine the tint of the opaque properties of the surface, on the other hand they also work as light filters. The higher the intensity share of the relevant color component, the more light of that color is allowed through. Take, for example, an object with the color values red = 1, green = 0.5, blue = 0 and a transparency value of 1, which is therefore completely transparent. Now take a light-ray passing through the object from behind: the red portion of the light is allowed through in its entirety, the green share only by half, and the blue component of the light-ray is completely absorbed. This produces the characteristic color of the transparent object, which corresponds to the color entered. You therefore need give no great thought to the individual color values and filtering attributes of the object; you simply enter a suitable color, as indicated above. For objects that should not filter any proportion of the light, you accordingly enter very high RGB values, which correspond to a very bright color. To obtain a clear transparent window, for example, you would assign the color white to the window object.

Refraction
The photo-realistic calculation of real refraction in transparent objects is performed only when rendering pictures in raytracing mode. The optical density determines the refraction of the light-ray in transparent objects. A value of 1 corresponds approximately to the value for air, so that there is no refraction of the viewing ray worth speaking of. In comparison, the refraction value of water is approximately 1.33. When a light-ray enters a medium of a different optical density, the phenomenon of total reflection can also occur. If a light ray passes from an optically dense to a less dense medium and the incidence angle of the light exceeds a given material-dependent angle, the light is totally reflected. Total reflection can also occur in a closed body. For example, a fiber-optic element is produced from thin fiberglass that includes an optically low-density layer at the surface and relays its information through total reflection of the light pulses at the outside wall.
Some refractive indexes:
Air 1
Water 1.33
Glass 1.5 - 1.9
Rock salt 1.54
Diamond 2.47

Transparency and Overlapping Objects
With closed objects the exit point of the bent viewing ray is automatically calculated in one operation, in order to save valuable calculation time. Therefore, if you have a transparent shape (e.g. cube, sphere) surrounding an opaque object, you cannot see the parts of the opaque object through the inside of the transparent object. There is, however, one option: if you switch on the object attribute "Render all Facets" in the object-attribute box at the lower left of the Material dialog, then closed objects are no longer interpreted as solid objects, but as air-filled hollow objects. In this event the objects that are inside the transparent object are also visible.

Glow
If the option is switched on, the object maintains the color assigned to it - independent of illumination from other light-sources.
The reflected and mirrored light share from other light-sources comes in addition. However, the object is not considered as a light-source, i.e. there is no illumination of other objects or shadow casting from the object's glowing (do not mix that up with the object attribute which changes any object into a real area light source). The glow effect is painted only on the surface area of the object that is not covered by a bitmap. The glow effect can be switched on separately for each individual bitmap on the bitmaps material page. This enables a television screen to be simulated, for example: a self-luminous bitmap can be projected onto the screen, while the TV casing is not self-luminous. Or do it the other way round, e.g. for blocks of flats illuminated at night, with glowing blocks enveloped in bitmaps with no self-luminosity and masked holes at the window positions, where the glowing block shines through. Further examples of self-luminous objects are, e.g., neon advertising tubes - even more impressive with an added halo effect - or light-emitting diodes in a switchboard. The glow parameter varies the strength of the self-luminosity - with a self-luminosity of 0 the object behaves exactly like all other objects, so the color rendered uses only reflected and mirrored light from the background lighting. At a value of 0.50, half of the inserted color brightness is used as self-illumination, and at 1.00 the object shines with the full intensity of the inserted color - again plus the reflected and mirrored light shares.

No shadows
Objects indicated as having this attribute do not cast a shadow onto other objects (i.e. self-luminous objects, objects transformed to area lights or window glass that should not filter light). However, you can also decide for each light source separately in the light dialog, with the option , whether it should contribute to the rendering of shadows.

Halo
The halo effect is the natural addition to the glow function. It creates a halo of light enveloping the outline of the object. You can use this effect to simulate atmospheric halos around planets or, for instance, to create swirling swarms of glowing particles. The halo effect is produced entirely in post-processing by means of object- and z-buffers after the rendering of the picture. Therefore you cannot see halos through transparent objects, nor are they mirrored by other objects. The radius of the halo is defined in pixels and can be set via the "Pixel Radius" parameter. Next to it you can select the appropriate halo color. If the glow effect is also applied, you should match the halo color to the object color.
Overlay Object - If this button is switched off, the halo is only drawn around the object; otherwise the outline of the object will be partly overlaid with the light halo, similar to the reflections and filtering in a planet atmosphere illuminated from behind.
Examples: On the left, 4 NURBS area lights were provided with a little additional light halo surrounding the light areas. The picture in the middle demonstrates a combined scan-halo effect - the head, with activated object halo, is uncovered slowly in an animation by a simple box object, which gives the impression of a materializing head (see also project file "\projects\headhalo.cmo"). The picture on the right shows some glowing meteors with halo effect.

Texture Blur
When rendering objects lying deeper in the background, a single pixel of the screen obviously cannot display all the texture details represented by this screen pixel.
Using only a single hit point from the object's surface would be like picking a color at random from the object, resulting in a noisy and flickering appearance. This is even more disturbing when animating the scene. You could of course reduce this effect by applying a higher oversampling rate (antialiasing), but that is very expensive in rendering time, and at great distances, e.g. at the horizon of planes, up to thousands of subpixels would have to be computed. Instead, we apply a special filter to blur the texture pattern with increasing distance, thus reducing the noise at almost no extra cost in rendering time. The blur function will also smooth the effect of surface normal distortion with increasing distance, for instance the normal distortion of procedural textures, landscape textures or the waves function. This is necessary because disturbing noise is not only generated by colorful texture patterns but also by the flickering light reflections on uneven surfaces simulated by normal distortion.
Example: The illustration on the left shows a simple sphere standing on a striped plane. Normal distortion was switched on for the stripe pattern to intensify the problem. Neither antialiasing nor texture blur was applied for the rendering of the first picture. The noise caused by the texture pattern is clearly visible in the background. For the second picture antialiasing was switched on at the maximum quality level of 4 (25 subpixels per pixel). The quality of the picture increases dramatically, but the same is true for the rendering time. The illustration on the right shows the same scene with a low level of antialiasing (level 1 = 4 subpixels per pixel) and the texture blur filter switched on. As a result you get the best quality, and the rendering is much faster because of the lower oversampling rate.

Object Properties
At the bottom of the material page yet more object attributes can be set that are not necessarily related to material/texture formation. All these attributes are saved in the object file - not in a material file - and are not transferred from other objects with the "Get" function (because each object has its own attributes where the material settings otherwise remain the same). Go on with the chapter Material - Object Properties.

.topic 45 At the bottom of the material page more object attributes can be set that are not necessarily related to material/texture formation. All these attributes are saved in the object file - not in a material file - and cannot be referenced from other objects with the "Get" function (because each object has its own attributes where the material settings otherwise remain the same).

Object Properties - Photon Mapping
For a deeper understanding of photon mapping see also:
Photon Mapping - Introduction and examples
Render Options - Global Illumination - Raytracing + Photon Mapping
Light Dialog - Photon Emission Parameters
The following two options influence the behaviour of objects when rendering in photon mapping mode:
Caustics - Aim Additional Photons at Object - This option is relevant only for highly specular or transparent objects that cast caustic light reflections. If the attribute is activated, additional photons are aimed at this object to get a higher resolution for the light reflection patterns that arise from the photons reflected by and transmitted through the object.
Deactivate this function for ordinary windowpanes - If the option is deactivated, the photons pass in a straight line through the object without being deflected by the refractive material. Only the light color is filtered by the material, as in conventional raytracing without photon mapping. This is particularly useful for ordinary windowpanes. In general, the light transmitted through window panes is intended for the general area illumination of a room, contributing to the global photon map, and not for unnecessary caustic reflections.

Only Direct Light - No Photon Mapping - You can exclude individual objects from the photon mapping process. If you switch on this object attribute, the object becomes invisible to photons and is instead illuminated directly by each light source. Possible uses: if you render in global illumination mode (only photon mapping, no direct lighting) and a small object is not hit by enough photons to shade it accurately, you can exclude these small objects from photon mapping instead of emitting more photons, and they will be correctly shaded again with direct lighting. Or if you render moving objects in an animation with a static photon map: if the photon map is calculated only once at the start of the animation, objects excluded from photon mapping can still move through the scene. For the plane object this attribute is switched on automatically. Since the plane extends up to the horizon, it would be pointless to waste photons on this infinite plane.

Interpolation
Interpolation gives the surface of an object a rounded appearance, producing a much more realistic depiction, especially on swept objects. If you create, for example, a sphere from the primitives menu, which is formed from relatively few points and facets, you will see that without interpolation the sphere is only approximated and the triangular facets are easily seen, and the outline of the sphere is not round but has corners due to the edges of the facets along the outline. If you switch on interpolation for the object, the surface normals of a facet and its adjacent facets are included in the calculation of the facet's illumination, creating the impression of a curved surface. This makes the object look smooth and rounded. However, only the shading of the facets changes; the outline of the body remains the same. The difference is shown in the illustration.

Interpolation Angle
It is often useful to exclude some part of an object's surface from interpolation - the top and bottom surfaces of a vertical cylinder, for instance. For this reason interpolation is made adjustable by setting the maximum angle at which adjacent facets are involved in the interpolated shading. If the angle between adjacent facets exceeds this value, the surface calculations of the two facets do not influence each other. The screenshot shows three representations of the same object, each with a different interpolation setting. On the left you see the object with no interpolation. In the middle the object is interpolated up to a maximum angle of 180 degrees, i.e. all facets are used for the interpolation calculations. However, the result is not what you would expect from an interpolated rounded surface: as the covering surfaces are also involved in the interpolation, their triangular shape makes the object look somewhat crinkled. On the right you see the object with interpolation set to a maximum of 60 degrees.
The angle between the sides and the covering facets is 90 degrees, so the interpolation for the two types of surface is calculated independently. The sides still look rounded, since the angles between them are less than 60 degrees, resulting in mutual interpolation. Here you see an example showing the same effect on a spherical object. From left to right it is shown without interpolation, with interpolation up to a maximum of 22 degrees (involving mutual interpolation only for the uppermost and lowermost facets), and with interpolation up to a maximum of 60 degrees, where all facets are smoothed with respect to each other.

Excluding Single Facets from Interpolation
In the "Edit Object" menu you have the additional possibility of marking individual facets and switching off interpolation for this group of facets. Clicking the "facet interpolation" button switches off interpolation for all selected facets; using the "Facet interpolation" button again reverses this function. The screenshot shows a sphere with its middle facets excluded from interpolation.

Render All Facets
When an object is drawn it is not always necessary to consider all of its facets. If, for example, you construct a sphere, then essentially only the front hemisphere needs to be drawn - the back half cannot be seen. There is a significant speed advantage when rendering if the following conditions are met: On construction of an object the facets are created so that the normals (the vectors perpendicular to the facets) always face outwards. When rendering the picture, only those facets are drawn whose facet normal is directed towards the camera's viewpoint (angles <= 90 degrees). However, this applies only to closed objects that you cannot see into. In the illustration you see an example of a three-quarter sphere - having an opening without covering facets. On the left half of the picture you see the sphere with all facets represented. On the right side you see the sphere as it is drawn if only those facets are shown whose normal is visible to you (angles < 90 degrees). It also applies only to objects in which the normals are uniformly aligned during object creation, which is the case for all objects constructed with CyberMotion. With the help of this button you can decide whether the facet normals of the object should be considered when rendering. All facets are drawn if the function is switched on. When producing objects in CyberMotion you theoretically need not be concerned with this function, as the program recognizes whether the object in use is closed or open (e.g. a ribbon object) and automatically uses the correct method. However, the manual use of the function can be important in the following circumstances: When you import objects in foreign formats (such as [DXF] or [RAW]) that do not contain the necessary information for aligning the normals. CyberMotion attempts to assign sensible normal alignments to the imported object automatically, but this is not always possible. For foreign formats, therefore, the option "Render all Facets" is always switched on at first. In the material dialog you can preview most objects without this function to see whether the object is still rendered correctly. If not, switch "Render all Facets" on again. In many cases the normals are uniformly aligned when the object is imported, but it can happen that the normals all point to the interior instead of to the exterior.
In this case you can switch off the option "Render all Facets", go into the "Edit Object" work mode and apply the function "Invert Normals" to the relevant object. This function completely reverses the alignment of the normals of all selected facets of an object, so that all its normals point to the outside again. If you work on objects in "individual points" mode, it can happen that some facets are turned inside out and thus become oppositely handed, due to the orientation of the points (see illustration). In such a case you should also switch on the option "Render all Facets". If you add new facets in "Edit Object" work mode, the normal orientation has to be checked and inverted if necessary. Or just switch on the attribute.

Converting an Object into an Area Light Source
Area light sources, as they are often used in modern architecture, e.g. in light panels, are hard to simulate with standard point light sources. In CyberMotion, however, you can convert any object you like into an area light source, simply by activating the corresponding object property. Each point of the object is then interpreted as a small subordinate point light source contributing a small share of the object's total light intensity. As a result, the rendering time - especially when rendering shadows - increases with the object's point resolution, since each point enters the illumination process with a separate light and shadow feeler. NURBS patches are ideal area light objects because of their regular structure with evenly spaced points forming the surface. Apart from that, you can change the point resolution of NURBS patches at any time, e.g. to render faster preview pictures with a low point resolution and then switch to a higher NURBS resolution for the final rendering. Finally, NURBS patches have no thickness, so they can easily be installed in wall panellings. A maximum of 200 light and shadow sensors is calculated for each area light object. If the object contains more points, samples are picked randomly from the object.

Adjusting the Light Properties for Area Lights - Once you have activated the attribute, the object will also be listed in the light dialog together with all standard light types. If you then change to the light dialog, you can edit the light parameters of the area light object. You can define the light color, the intensity and also the photon emission parameters. Of course area lights are included in the photon mapping process, so area lights can emit photons just like all standard lights. The light color is independent of the material color. Think of the object as an ordinary body or as a container for a light source. If the light is switched off or shines very dimly, you still have to take into account the light reflections from the container if it is illuminated by other lights in the scene. Therefore the object material is calculated and interpreted as an ordinary object surface with all of its possibilities, e.g. bitmap textures, reflection or transparencies. Only then is the self-luminosity with the light color added to the material color. In an animation, for instance, you can animate the light color from dark to bright and the object will slowly begin to glow and illuminate the scene. Part of this interpretation is that area light objects cast shadows when they are illuminated by other, brighter light sources. You can switch off the shadow casting by activating the corresponding attribute for an area light.
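The way an area light contributes to the illumination can be pictured as a simple sampling loop. The following sketch (assumed names and a strongly simplified diffuse-only model, not the program's internal code) distributes the total light intensity over at most 200 sample points of the object, each with its own shadow test, as described above.

    import random

    MAX_SENSORS = 200  # upper limit of light/shadow feelers per area light object

    def area_light_contribution(surface_point, object_points, light_color, intensity,
                                is_occluded, diffuse_term):
        """Sum the shares of all sampled sub-lights at 'surface_point'.
        is_occluded(a, b) and diffuse_term(p, light_pos) are assumed helpers."""
        points = object_points
        if len(points) > MAX_SENSORS:
            points = random.sample(points, MAX_SENSORS)   # pick random samples from the object
        share = intensity / len(points)                   # each sample carries a small share
        total = [0.0, 0.0, 0.0]
        for p in points:
            if is_occluded(surface_point, p):             # separate shadow feeler per sample
                continue
            d = diffuse_term(surface_point, p)            # e.g. Lambert cosine / distance falloff
            for i in range(3):
                total[i] += share * d * light_color[i]
        return tuple(total)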
Two examples of area lights: in the picture on the left, 4 NURBS patches converted to area lights illuminate the room. In the right picture the glass sphere was changed into an area light source.

Convex
With this button you tell the program whether the object has a convex surface. Objects with convex surfaces have the following property: if you take any two points inside the object and connect them with a straight line, the line at no point cuts the object's surface. Ellipsoids, prisms, cubes, cylinders etc. are examples of such objects. A convex surface is important in the depiction of an object in raytracing mode, where shadows, transparency or reflections are calculated. It is also important in some Boolean operations. With an irregularly shaped object, for example, reflections of the object can appear in the object itself, or the object can throw a shadow on part of its own surface. This cannot happen with a convex object, and, as an object is always formed from many individual faces, considerable time can be saved on rendering if you tell the program that the object's surface is convex. During the calculation of reflections, refraction and shadows, the program then checks only non-convex objects to determine whether they throw a shadow on themselves, whether parts of the object are reflected in other parts of the same object, or whether a refracted outgoing ray can re-enter the same object.

Particles - Reflector
If you define particle actions and have switched on the collision test in the particle editor, then, when an animation is generated, particles are tested for collisions with all objects for which this function has been switched on.

Point = Sphere
A very interesting effect, which dramatically changes the appearance of an object, can be achieved with this option. If "Point = Sphere" is switched on, the object is no longer constructed from facets. Instead, all points of the object that previously defined the corners of the facets are used as centers of spheres. The radius of these spheres is set with the parameter beside the button. These spheres can only be rendered in raytracing mode, as they are defined as analytic objects with center and radius. In the illustration you can see two identical objects; the only difference is that the right-hand object is rendered with the option "Point = Sphere".

.topic 43
Choose the tab in the header of the Material dialog to enter the parameters for a procedural texture.

Procedural Textures
Procedural textures are usually mathematical structures that generate a three-dimensional pattern - the coloring of an object results from the position of the object's surface in relation to this three-dimensional pattern. They allow simple patterns like grid, block and ring structures, but also very complex patterns generated with fractal techniques to simulate grained wood, marble, rock or even multi-layered landscape textures. Furthermore, procedural textures are not restricted to manipulating the surface colors. They also allow the surface normal of the object to be influenced, so that you can simulate surface irregularities. The structure of the different texture patterns follows these principles: Starting from the center of the object's texture axes, a three-dimensional pattern is calculated along the texture axes. The object is therefore "within" this 3-dimensional pattern, which unrolls over the entire surface.
The color of a point on the surface of a textured object is simply taken from the corresponding point within this 3D matrix. By moving, scaling or rotating the texture axes, you simultaneously move, scale or rotate the 3-dimensional pattern. This is done in the three corresponding work modes "Move Texture", "Scale Texture" and "Rotate Texture".

Activate a Procedural Texture
Simply switch on the button at the top of the page to provide an object with a mathematical texture pattern.

Scale a Procedural Texture
Once you have correctly adjusted the size relationships of a procedural texture, you do not need to readjust all parameters to change the overall size. You can use the button beneath the activation button to apply a global scale to the texture pattern. This can also be achieved outside the material dialog using the texture scaling functions in the corresponding "Scale Object/Texture" work modes.

Material Color, Texture Color or Color Range
Most procedural texture patterns consist of two basic colors, the material color and the texture color. The basic material color (diffuse and specular reflection) is entered on the material page of the material dialog. The texture color is part of the respective texture pattern and is defined here on the procedural texture page. The only exception is the color range texture: there a color range is applied instead of the texture color, and the basic material color is ignored. Texture Color = Material Color - This option deactivates the texture color for procedural textures. Internally the basic texture pattern is still calculated, for instance a block texture, but you will see only a single-colored surface because all parts of the texture pattern are painted in the same color. What's the use of it, you ask? Well, the normal distortion function can still be applied to the underlying pattern function, e.g. to add a raised tile structure from a block texture or just to add some irregularities to the surface with random normal distortion. The function simply saves you time when you want to add such structures to a single-colored surface, by setting the texture color automatically to the material color. Of course you could also do it manually.

Texture Patterns
In the select box beneath the button you can select the desired pattern. A set of parameters - dependent on the pattern selected - appears underneath the select box.

Block Texture
The block texture is similar to the texture of a chessboard, except that you get a structure not only in the horizontal direction but also vertically. Furthermore, you are not restricted to a cube pattern. The dimensions of a block can be entered individually for each axis with the "Block Dimension" X, Y, Z parameters, producing a rectangular block pattern. Next to the block dimensions is the "Net Width" parameter. If this value is not 0, you get a tile-like structure instead of the checkerboard-like pattern. The X, Y, Z parameters are again responsible for the dimensions of the tile block; the net width determines the separation between the individual blocks. If a normal distortion is also applied to the block pattern, an impression of rounded edges at the rims of the tiles is achieved. See also: normal distortion. There is one more parameter for the block pattern - the "Row Offset" moves every second row of blocks sideways by a given amount, thus giving the impression of a brick structure. The picture shows an example of this brick texture (x = 34, y = 16, z = 34, net width = 2, row offset = 17). Again, a surface normal distortion simulates rounded edges.
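As an illustration of the principle (the parameter names and the exact formula are assumptions, not CyberMotion's internal code), a block pattern of this kind can be computed from the 3D texture coordinates alone: the texture space is divided into cells of the given dimensions, a non-zero net width turns the cells into tiles separated by a net, and the row offset shifts alternating rows sideways.

    import math

    def block_texture(x, y, z, dim=(34, 16, 34), net_width=0, row_offset=0):
        """Return 'net', 'texture' or 'material' for a point (x, y, z) in texture space."""
        row = math.floor(y / dim[1])
        x += row_offset * row                      # shift every second row sideways (brick pattern)
        cell = [math.floor(c / d) for c, d in zip((x, y, z), dim)]
        local = [c - ce * d for c, ce, d in zip((x, y, z), cell, dim)]
        if net_width:                              # tile-like structure: tiles separated by a net
            return "net" if any(l < net_width for l in local) else "texture"
        return "texture" if sum(cell) % 2 else "material"   # checkerboard alternation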
Stripe Texture
To define a stripe texture you only need to edit 2 parameters - the widths of the two colors forming the stripes. In addition you will see the three X, Y, Z buttons, with which you determine which of the three object axes the stripes should follow.

Ring Texture
The parameters for the ring texture are exactly the same as for the stripe texture. The two parameters this time determine the widths of the two colors forming the rings. The X, Y, Z buttons determine the axis that acts as the center of the concentric rings. If, with the ring texture, you additionally apply a distortion of the texture pattern and a color interpolation of the two texture colors, you can obtain wood-like structures as in the right picture above.

Sphere Texture
This texture behaves similarly to the tile texture but, instead of blocks, spheres are used, whose spacing is set with the Distance parameter. The Diameter parameter defines the diameter of the individual spheres. Do not be surprised, however, if you see only small circles on the object despite having entered a large diameter - your object's surface is then intersecting just the outer shell of the texture spheres, which form a three-dimensional spatial pattern. If this is the case, you can reposition the texture slightly using the Move Texture function, change the Distance parameter, or simply scale the whole texture using the global scale parameter at the top of the page. Two further examples: The left picture shows a sphere texture combined with a texture pattern distortion and color interpolation. The block in the right picture uses two identical colors for the material and the texture color, but by switching on the normal distortion the sphere texture becomes clearly visible, somewhat resembling protruding rivets.

Color Range Texture
The color range texture resembles the stripe texture, only this time the material and texture colors are replaced by a strip covering a range of colors. If the button is selected, one color range strip follows after the other; otherwise the areas above and below the color range strip are painted in the start or end color of the color range, respectively. When you operate the button, the width of the color range is automatically calculated so that it precisely covers the whole object. Three examples: The picture on the left shows a color range texture (combined with texture and normal distortion) intended for a landscape object with several layers of sedimented rock. The illustration in the middle shows a color range with fractal texture distortion and glow, thus resembling lava rock. The picture on the right demonstrates a rainbow color range on a transparent sphere to create a somewhat iridescent surface.

Fractal Noise Texture
The fractal noise texture needs only a value for random distortion, in addition to the material and texture colors, to produce texture patterns. This random texture distortion is based on fractal algorithms and can be applied to all texture patterns. In the fractal noise texture, the value provided by the fractal algorithm is used to mix the texture colors, while the other texture patterns use this value to distort the texture pattern itself. Fractal textures are the basis of every good landscape simulation. With the help of iterative procedures, highly detailed patterns resembling ground structures can be generated. This texture provides high detail at virtually any resolution.
You can zoom in on such a surface and new details will constantly be revealed. (This is in contrast to a bitmap, for instance, where a camera zoom would eventually show the bitmap's individual pixels.) Example: fractal rock texture. A fractal texture in CyberMotion is described by two colors, the number of iterations and a scale parameter. Furthermore, you can activate the B-Spline interpolation, which is far more complex to calculate but results in a smoother blending of the two colors. The number of iterations defines the level of detail of the texture. A single iteration results in a very blurred pattern; further iterations add new details to the fractal texture. The illustration shows the same texture pattern, on the left with a single iteration and on the right with five iterations. Here again the same pattern, this time with a higher scale value for the fractal noise. Depending on the scale value entered, the individual fractal patterns are rendered close together or wide apart.

Distorting Textures
If you switch on the button, a random distortion is superimposed on the base pattern. For the distortion of the texture, exactly the same technique is used as described above for creating a fractal texture, but this time the fractal value is used to distort a pattern instead of choosing a texture color. Low iteration values result in a smooth and rounded distortion of the pattern - higher values lead to a more chaotic distortion. To make the "random" distortion reproducible, you can enter a parameter between 0 and 1, which initializes the random number generator with the same starting value every time. You can vary the appearance of the pattern by changing this starting value. On the left is a sphere with a procedural ring texture. On the right is the same sphere with random distortion activated at a single iteration. The next picture shows the ring texture after 5 iterations. The pattern is certainly more chaotic, but it still lacks the smooth color transitions we already know from the fractal noise texture. That's no problem, since there is an additional color interpolation button at the bottom of the procedural texture page. When this option is selected, a smooth transition between the two texture colors is calculated. Hard to believe, but the marble sphere in the right picture above shows the same ring texture pattern as the one in the picture to its left - only color interpolation has been switched on.

Sinusoidal Distortion
Switch on the button if you want to superimpose a sine wave formation onto the pattern. Note that this is a 3-dimensional waveform. Superimposed on a stripe pattern on a cube-shaped object, for example, this produces an undulating pattern on 4 of the 6 sides of the block. On the other sides the wave peaks and troughs penetrate the cube sides and form circles or circular rings. A full sine wave runs over an interval of 2 x pi (about 6.28). By changing the Stretch parameters you can make the waveform spread in the horizontal or the vertical plane, respectively. Ring texture with superimposed sine wave.
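The fractal value that drives both the noise texture and the pattern distortion can be pictured as several layers ("iterations") of a smooth noise function added together, each layer finer and weaker than the previous one. The following sketch shows this general technique; the noise function, the weighting and the parameter names are assumptions and not necessarily the algorithm CyberMotion uses internally.

    import math, random

    random.seed(0)
    _values = [random.uniform(-1.0, 1.0) for _ in range(256)]   # simple value-noise lattice

    def smooth_noise(x):
        i, f = int(math.floor(x)) & 255, x - math.floor(x)
        f = f * f * (3 - 2 * f)                                  # smooth interpolation between lattice values
        return _values[i] * (1 - f) + _values[(i + 1) & 255] * f

    def fractal_value(x, iterations=5, scale=1.0):
        """Sum 'iterations' octaves of noise; more iterations = more fine detail."""
        total, amplitude, frequency = 0.0, 1.0, scale
        for _ in range(iterations):
            total += amplitude * smooth_noise(x * frequency)
            amplitude *= 0.5                                     # each layer is weaker...
            frequency *= 2.0                                     # ...and finer than the last
        return total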
Surface Normal Distortion
The surface normal is a vector standing perpendicular to the surface and is used to determine the surface brightness with respect to the incident light and the object color. Distorting the surface normal allows a raised appearance to be added to the surface structure. If, for example, you apply a stripe texture and switch on the distortion of the surface normal, the normals are distorted towards the edges of the stripe pattern. In this way the calculation of the light intensity creates the impression that the surface falls away at the edges of the stripes. This normal distortion opens up a wide range of possibilities. It would be very laborious, for instance, to produce a tiled background without normal distortion: you would have to create and position every individual tile object. Instead, you can create them all with a single object formed from a rectangular block, with a block texture and a normal distortion assigned to the block pattern.

Normal Distortion According to Texture Pattern or Random Normal Distortion
In the select box next to the button you can choose between different distortion modes for the texture. For all regular patterns (block, stripes, rings and spheres) you can choose a normal distortion that follows the texture pattern. In this case only the edges of the pattern are distorted, as explained for the stripe pattern above. The illustration shows a sphere with a ring texture, material color yellow and texture color green. The second picture demonstrates normal distortion for the material color only ("Material Color" selected in the select box). When you choose the "Texture Color" entry in the select box, only the green stripes are distorted (3rd picture). The last picture shows the sphere after "Material & Texture Color" has been chosen for the normal distortion.

Range
With the Range parameter you can restrict the area over which the normal distortion applies. Take the tile pattern, for example. With a small range only a narrow area at the edges is distorted. With a value of 1 the distortion stretches over the whole tile, resulting in a large bump on the tile.

Scale (-1 to 1)
With this parameter you influence the apparent height of the normal distortion. At low values the normal curves only a little at the edges of the pattern, at higher values more. An interesting possibility is to use negative values. Again, take the block pattern as a demonstration. With a positive value the normal is distorted so that the tile pattern appears to be raised. Use a negative value instead, and the direction of the normal distortion is reversed, so the pattern appears to be indented.

Random Normal Distortion
In addition to the three pattern-controlled distortion modes there are two modes for random normal distortion. When a random mode is selected, the Range parameter is replaced by three separate parameters, each controlling the amount of distortion along the individual x-, y- or z-texture axis. Scale - With a high value the distortion pattern is very narrow; at low values the random pattern becomes more stretched. Distortion - Controls the basic amount of distortion for all axes. This initial distortion is then adjusted by the three separate scale values for the individual x-, y- and z-axes. B-Spline - The surface irregularities appear deeper and rounder, but the rendering time increases considerably.

Random
The illustration demonstrates a fractal rock texture with a random normal distortion. The distortion values: x=0.30, y=1.0, z=0.48. By using a higher y-value, the normal distortion along the y-axis is emphasized - the structure appears to have horizontal ridges. x=0.75, y=0.18, z=1.0. This time the y-axis is given a low value and the ridges run vertically - an ideal texture for the bark of a tree. x=1.0, y=1.0, z=1.0, Scale=1.0. High random distortion values combined with a green fractal noise texture - these are the basics of a grass texture.
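A random normal distortion of this kind can be pictured as tilting the surface normal by a small random offset whose strength differs per texture axis. The sketch below illustrates the idea with assumed names; the actual noise function and weighting in CyberMotion may differ.

    import math, random

    def distort_normal(normal, distortion=0.5, axis_scale=(0.3, 1.0, 0.48), rng=random):
        """Tilt a unit normal (nx, ny, nz) by per-axis random offsets and re-normalize."""
        offset = [distortion * s * rng.uniform(-1.0, 1.0) for s in axis_scale]
        tilted = [n + o for n, o in zip(normal, offset)]
        length = math.sqrt(sum(t * t for t in tilted)) or 1.0
        return tuple(t / length for t in tilted)

    # Emphasizing the y-axis (as in the first example above) favours horizontally running ridges.
    print(distort_normal((0.0, 0.0, 1.0)))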
Random & Texture
There is one last normal distortion mode, which combines random normal distortion with pattern-controlled distortion. It is mainly intended for color range textures. Color ranges are best suited for sedimentary rock layer textures. With the additional Random & Texture-controlled normal distortion you get a normal distortion that follows the alignment of the color range stripes and is also overlaid by a random distortion, thus better representing the sedimentary, ridged horizontal structure. To achieve this result, the alignment of the color range texture must match the distortion of the individual texture axes. If, as in the picture, the color range runs along the y texture axis, then the random normal distortion for the y-axis should also be high, while the x and z parameters should be kept very low.

Color Interpolation of the Texture Colors
The last box on the "Texture" page contains the parameter for color interpolation. If you switch on this option, the two texture colors fade into each other (available only for block, stripe, ring and sphere textures). The parameter controls the area over which the color transition is applied. Here again the example of the ring texture, on the left without color interpolation and on the right with full color interpolation over the whole ring width (color interpolation value = 1.0). For this wood texture only half of the ring width was interpolated (color interpolation 0.5), so only the edges of the rings fade into each other. Well, I think you have realized that the possibilities of procedural textures are countless. Simply try things out and play around with all the parameters - you can't do anything wrong, and the preview window always gives a fast representation of the changes. The next page, "Terrain texture layers", introduces an additional set of procedural texture layers especially designed for landscape textures.

.topic 107
When you select the tab in the header of the material dialog, a page providing special landscape texture layers comes to the fore. On this page you can define up to 3 additional texture layers overlaying the object's basic texture. The texture pattern used to build each individual layer is again based on the fractal noise texture described in the previous chapter on procedural textures. Soil, grass and rock in one single texture - the result of a fractal base texture for the rock overlaid with additional texture layers added on this page. These additional texture layers are applied depending on the slope angle and the height of the surface. For instance, you can define a white snow texture that covers only areas that lie high in the mountains and have moderate slopes. A random distortion and blending parameters provide additional irregularities and smooth transitions. See also: Tutorial - Landscape design.

The Parameters:
Add Terrain Texture Layers - Each layer can be switched on or off separately. The button provides a global switch to turn all layers on or off simultaneously. Layer - Selector Box - the layer selector box lists the 3 layers for the soil, grass and snow textures in the same order in which they are applied to the surface: first, of course, the soil layer, then the grass and finally the snow. Simply select a layer in the select box to edit its parameters.
On - Activates the selected texture layer.
Layer Color and Mix Color - If the "Fractal Color Mix" option is switched on, a fractal color pattern is calculated from these two colors; otherwise only the layer color is applied.
Height (±%) - The Height value describes the fraction of the overall object height up to which the texture is applied. For instance, with a value of 0.5 the texture layer would cover only the lower half of the object. A special case is the snow layer. Usually snow lies in higher areas and disappears in the lower and warmer regions below the snow line. To take that into account, you simply enter a negative value for the height parameter. The height calculation is then reversed, starting from the top of the mountain and running downwards to the ground.
Transition - Calculates smooth transitions with the ground at the edges of a texture layer. This parameter is available twice, once for the height-controlled blending and once for the slope-dependent transitions.
On Slope Up To - An angle of up to 90° specifies the slope up to which a texture layer will cling to the ground.
Random Height/Slope - Adds randomness to the height and slope calculation to provide further irregularities. Additionally, texture layers can overlap each other when a random distortion is applied. In this illustration the grass layer covers the soil layer. Nevertheless, with appropriate values for the Height, Slope and Random parameters, the soil breaks through the surface in many places.
Fractal Color Mix - If this option is activated, a fractal color pattern is calculated from the Layer Color and the Mix Color, otherwise only the Layer Color is applied.
Patchy - This option adds even more detail and variety to the landscape texture. Texture layers with this option activated take on a spotty appearance, no longer covering up the layers underneath but mingling with them instead. The Density parameter controls the frequency of the patches; the higher the density, the fewer gaps showing the underlying textures will appear. The basic size of the patches depends on the fractal pattern of the texture layer and is therefore controlled with the "Fractal Color Mix" parameter; the size of the patches can be adjusted with this slider even if the fractal color mixing is switched off. The Transition parameter again ensures smooth transitions at the edges of the texture islets. These pictures show the difference: on the left a snow texture without the function, and on the right the same texture, this time with the Patchy option switched on for the snow texture layer.
Normal Distortion - Corresponds to the random normal distortion described in the previous chapter on procedural textures. Each texture layer has its own normal distortion available. The normal distortion of an underlying texture layer is covered by the upper layer. For instance, a grassy plain covers the unevenness of the underlying rock and soil texture layers but also provides its own chaotic normal distortion to display an authentic variation in the grassy surface.
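The height- and slope-controlled layering described above can be summarized in a small decision function. The following sketch (assumed names and simplified logic, not the program's internal rules) decides whether a layer covers a given surface point from the point's relative height, its slope angle and the layer parameters.

    import random

    def layer_applies(rel_height, slope_deg, height=0.5, max_slope=40.0,
                      rand_height=0.0, rand_slope=0.0, rng=random):
        """rel_height: 0..1 fraction of the overall object height; slope_deg: 0..90."""
        h = rel_height
        if height < 0:                       # snow-line behaviour: measure from the top downwards
            h, height = 1.0 - rel_height, -height
        h += rand_height * rng.uniform(-1.0, 1.0)          # random irregularities
        s = slope_deg + rand_slope * rng.uniform(-1.0, 1.0)
        return h <= height and s <= max_slope

    # Example: a snow layer (height = -0.4) applied to a high, gently sloped point.
    print(layer_applies(rel_height=0.9, slope_deg=20.0, height=-0.4))   # True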
.topic 44
Select the tab in the material dialog to edit the parameters for bitmap textures.

Bitmaps
Bitmap textures are by far the simplest method of generating complex surfaces. Each point on the object's surface is mapped to a corresponding point of a pixel graphic, and the pixel's color is simply used to calculate the object's color at that point. You can create material surfaces very easily with the help of a paint program, or use scanned pictures to project logos or surface structures. You can, for example, build a city of skyscrapers from rectangular blocks and simply project the building fronts onto the blocks. Or, for an interior scene, scan a "da Vinci" and project the picture onto a "canvas" object. The list of examples is endless. In addition, you can use picture files to influence the appearance of an object in an entirely different way. The brightness values of a picture can be interpreted as information to influence material parameters and thus control reflectivity (reflection maps) or transparency (transparency maps). The pixel values can even be interpreted as a value for the distortion of the surface normals of an object and thus simulate surface irregularities (bumpmaps). Furthermore, you can apply alpha maps to create a fade-over between overlapping bitmaps or between bitmaps and a procedural texture. Up to 12 bitmaps (plus optional alpha maps) can be applied to every object, each provided with its own texture axis system so that it can be freely positioned on the object. A box with a separate picture on each side, for instance, is very easy to accomplish. Further effects like bilinear filtering, bitmap glowing, mask color, tiles and bitmap sequences for animated projections complete the great flexibility of bitmap texturing.

Add bitmaps
To add a picture to the bitmap list of a selected object, simply choose the button and select the desired file in the file select box that appears. After that, the newly added bitmap is displayed at the top of the bitmap list. You can switch bitmaps on or off with the help of the button beneath the list box. You can also click on a selected bitmap in the list box to change its status. Thus you can experiment with different bitmaps without having to reload and delete the files again and again. Bitmap file formats: CyberMotion supports the following picture formats: *.BMP, *.JPG, *.PNG, *.PCX and *.TGA. Bitmap paths: The program searches for bitmaps first in the current project folder. If the picture is not found there, the bitmap library folder (set up during the installation of CyberMotion) is searched next. There are 4 additional search paths for bitmap files, which can be specified in the Program Settings dialog after selecting "Customize" under the "File" entry in the menu bar.

Overlapping Bitmaps - Texture Layers and Fade-Over via Alpha Mapping
The order in which bitmaps are listed in the list box is relevant. The lowest layer of a texture is always the object's surface, that is, the material color or a procedural texture. The next layer is determined by the lowest bitmap in the list, followed by the rest in ascending order. If an additional alpha map is assigned to a bitmap, the intensity values of the alpha map are interpreted as fading values to mix the bitmap colors with the colors of the texture layer lying underneath it. This way a bitmap can be mixed either with a procedural surface texture or with another, overlapping bitmap. If no alpha map is assigned to a bitmap, it covers up all texture layers lying underneath it. Sorting the bitmap list - Next to the button are 2 additional arrow buttons for changing the order of the listed bitmaps. If you add a bitmap to the list, it is automatically inserted at the top of the list.
If, for example, you want to use this bitmap as a basic texture layer lying beneath all other bitmaps, simply select the bitmap and use the ">" button to move it down to the end of the list. Example: (file "\projects\alpha mapping\alpha mapping.cmo") On the left you can see a box object that has been given a procedural block texture. The next 2 pictures show a camouflage-colored bitmap and an alpha map assigned to it to generate a fade-over between the procedural texture and the bitmap. The picture on the right shows the final texture of the box object. Now we add an additional "Top" bitmap to the bitmap list of the box object. There is no alpha map assigned to this bitmap, so there is no fading. Because the picture is at the top of the list, it covers up all texture information lying beneath it, as you can see in the picture on the right above. Now select the "Top" picture in the list box and move it down to the end of the list. The "Top" bitmap now lies above the procedural texture, which is covered by it, but underneath the camouflage bitmap. Because of the alpha map assigned to the camouflage bitmap, you can still see the "Top" bitmap shining through it.

Replace, Copy or Delete a Bitmap
- If you just want to replace a certain bitmap file without changing the basic settings and the texture axis position, simply select the bitmap in the list and operate the button. A file select box appears and you can choose a new file to replace the old one.
- Copies a bitmap including all settings and the texture axes belonging to it. This function is useful when you need to align 2 bitmaps exactly on top of each other, for instance when you combine a bitmap with a bumpmap (or a reflection or transparency map) of the same size. First you scale the bitmap to the right dimension and move it to the desired position. Then you copy the bitmap in the material editor and change the mapping mode of the copy from bitmap to bumpmap. Finally you replace the copied bitmap file with the desired bumpmap file via the replace function.
- Deletes a selected bitmap from the list.

Mapping Mode - Bitmaps, Bumpmaps, Reflection or Transparency Maps
The mapping mode of a bitmap can be selected in the "Mapping" select box.
Bitmap - The picture is projected onto the object. This is the default mapping mode for newly added pictures.
Bumpmap - The brightness values of the picture are interpreted as a value for the distortion of the surface normals of the object. Thus you can simulate surface irregularities. For instance, to provide an engraving on an object you simply need to project a picture (bumpmap) with the required text onto the object and select the bumpmap mapping mode. The illustration shows an example of the combination of a bitmap and a bumpmap. A bitmap with a marble structure was projected onto a simple rectangular block. Additionally, projecting the "BUMPMAP" text onto the block created the bumpmap shown above. The grayscales of the bumpmap are interpreted as a height map. By illuminating the block from the side, the outlines of the text appear to protrude from the block. The strength of the surface distortion is determined by the intensity of the picture point projected onto the object. A more grayish tone instead of black would cause a weaker distortion and therefore a less raised structure.
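The usual way such a grayscale bumpmap is turned into a surface distortion is to treat the brightness as a height value and tilt the normal according to the brightness differences between neighboring pixels. The sketch below shows this standard technique with assumed names; CyberMotion's exact filter may differ.

    import math

    def bump_normal(heightmap, x, y, base_normal=(0.0, 0.0, 1.0), strength=1.0):
        """heightmap: 2D list of brightness values 0..1; returns a tilted unit normal."""
        h, w = len(heightmap), len(heightmap[0])
        dx = heightmap[y][min(x + 1, w - 1)] - heightmap[y][max(x - 1, 0)]   # brightness slope in x
        dy = heightmap[min(y + 1, h - 1)][x] - heightmap[max(y - 1, 0)][x]   # brightness slope in y
        n = (base_normal[0] - strength * dx, base_normal[1] - strength * dy, base_normal[2])
        length = math.sqrt(sum(c * c for c in n))
        return tuple(c / length for c in n)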
Reflection Map - In this mode the brightness values of a picture are interpreted as a value for reflectivity and shininess. This can be used to apply a dull, non-reflective bitmap to a reflective material, or vice versa. The higher the pixel intensity of the reflection map, the higher the reflectivity value: white corresponds to a material reflection value of 1, black corresponds to a non-reflective material. In the picture above, a reflection map was used in combination with a bitmap and a bumpmap to produce rusty spots on a reflective cylinder (file "\projects\reflection map\reflection map.cmo"). These textures were used: The left picture shows the rust texture. The white background of the picture was masked out using the mask function; the mask color was set to white and a high tolerance value of about 40% was entered. Thus the background of the picture becomes transparent and is not projected onto the object. The picture in the middle represents the reflection map. It is just a copy of the rust texture, grayed and darkened using the gamma correction function of an image editing program. The dark parts of the reflection map reduce the basic reflectivity set for the material. Again we mask out the white background color of the reflection map - otherwise the white parts of the reflection map would overwrite the basic material reflectivity and be interpreted as the maximum reflection value of 1. The last picture shows the bumpmap that was applied to give the rust a rough structure. It is again a copy of the rust texture, grayed and somewhat lightened so that the "engravings" do not become too deep. In a bumpmap only the difference between neighboring pixel intensities is relevant for the distortion of the surface normal, so there is no need for a mask color.
Transparency Map - With transparency maps it is just the same as with reflection maps, only this time the brightness values of a picture control the transparency of an object. Black points in the transparency map are interpreted as wholly opaque, whereas white points stand for total transparency. Transparency maps are only applied to transparent objects, that is, you also have to switch on the transparency button on the material page of the dialog. (File "\projects\transparency map\transparency map.cmo") The picture above shows a half-transparent block with an additional transparency map projection. The white letters in the transparency map are interpreted as wholly transparent and overwrite the half-transparent material setting on the block. The background of the transparency map is black and would therefore be interpreted as opaque. Applying the mask function with the mask color set to black causes the program to ignore the background color of the transparency map.

Reposition, Scale or Rotate Bitmaps
All bitmaps are provided with an independent texture axes system. Moving, scaling or rotating these texture axes also influences the alignment of the picture projected onto the object. The manipulation of the texture axes is carried out in the same way as working on objects. If, for example, you have selected an object in "Move Object" work mode, you can change to the "Move Texture" work mode by selecting the corresponding index tab at the top of the tool window. After that you can choose in a select box whether you want to move the texture axes of a procedural texture or the texture axes of one of the bitmaps assigned to the object. As a result, the corresponding axes grid is drawn - the grid has exactly the same dimensions as the selected bitmap.
You can now easily reposition the grid on the object, adjust its size in the "Scale Texture" work mode or - last but not least - rotate the picture in the "Rotate Texture" work mode.

Bitmap Projection Planes
A picture can be projected onto an object in five different ways. You select the required type of projection with this small popup menu:

Plane Projection
The bitmap is simply projected onto the front of the object, comparable to a slide projected onto the object. Each picture point simply becomes the corresponding object point on the X, Y plane of the object. With this type of projection there can be two points on the object that intersect the projection line for each point of the bitmap - one on the object's front and one on its rear. On the front of the object the bitmap is represented correctly, on the rear it is mirror-reversed. By choosing one of the three plane projection modes you can control this:
Plane > The selection ">" ensures that the projection is only seen on the front of the object (fig. on the left).
Plane >< With this projection the bitmap is also projected onto the rear of the object - however in reverse. Therefore, if you turn the object, the picture on the rear is correct (center fig.).
Plane >> The picture projected onto the front continues through to the other side and is therefore mirror-reversed on the rear of the object (fig. right).
Projection Angle - With this parameter you control the angle of incidence under which the plane projection is "visible" on the object. This is best demonstrated on a spherical object. The picture shows a plane projection onto a sphere with an angle of incidence of 45°. A point of the bitmap is projected onto the sphere if the angle between the projection direction (always along the z-axis of the texture axes system) and the object normal (the vector standing perpendicular on the sphere) is smaller than or equal to the entered projection angle. Possible uses: On the left you see a block that is to be textured with a different bitmap on each side. On the front of the box a camouflage-colored bitmap is applied, with the bitmap dimensions extending slightly beyond the box dimensions. With a projection angle of 90° the problem arises that the side walls of the box are textured too, which leads to the unsightly stripes depicted in the illustration. In addition, the undesired effect of overlapping bitmaps occurs on each side wall. Simply decreasing the projection angle to 89° puts things right, as you can see in the illustration above on the right. This illustration shows a sphere with a flattened front and a bitmap projected onto it. We want the bitmap to be projected only onto the flattened section of the sphere. That is achieved by reducing the projection angle to 1°.

Cylindrical Projection
With cylindrical projection the picture is wrapped horizontally around the object, in the way you would affix a label to a bottle. Similar to the plane projection there are 3 different projection modes:
Cylinder < - Only the outside of a cylindrical object is labeled. Thus you can project a label onto a glass without also projecting it onto the inner side of the glass.
Cylinder > - Vice versa, the bitmap is visible only on the inside walls of a cylinder.
Cylinder >> - The bitmap passes through the object and is visible on the inner and outer walls of a cylindrical object.
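For readers who like to think of projections in terms of coordinates: a cylindrical projection essentially converts the angle around the texture axis into the horizontal picture coordinate and the height along the axis into the vertical one. The following sketch shows this general idea with assumed names and scaling conventions; it is not CyberMotion's internal mapping code.

    import math

    def cylindrical_uv(px, py, pz, scale_x=1.0, scale_y=1.0):
        """Map a point (in texture axis space, cylinder axis assumed = y) to picture coordinates 0..1."""
        angle = math.atan2(pz, px)                    # position around the cylinder axis
        u = (angle / (2 * math.pi) + 0.5) / scale_x   # X scaling: 1.0 wraps the picture exactly once
        v = py / scale_y                              # Y scaling stretches the picture height
        return u, v                                   # u or v outside 0..1 -> no picture / cut off at the join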
When using cylindrical projection you must bear in mind the following regarding picture scaling in the X direction: When X is 1, the picture is scaled so that it wraps exactly once around the object and joins up behind. If a thin line is still visible between the two ends of the picture, just increase the scaling in the X direction a little until the ends join; usually a value of 1.01 is sufficient. With X less than 1, the angle of the cylinder projection is reduced, i.e. the picture no longer fits completely around the cylinder. In this way you can easily produce a label. With X greater than 1, the projection would overlap; the picture is therefore simply cut off at the join. The scaling in the Y direction stretches the height of the picture - as in the plane projection.

Spherical Projection
With spherical projection the picture is again projected around the object, exactly as in the cylindrical projection. This time, however, the picture also wraps vertically, so that it is cast over the whole of the object. The X, Y scaling is therefore the same as for the cylinder projection: X=1, Y=1: the picture is matched exactly around the object. X<1 or Y<1: label effect. X>1 or Y>1: overlapping.

Bilinear Filter
This function smoothes a bitmap texture. Especially when small pictures are magnified through bitmap projection, or when you zoom in on an object, for instance in an animation, undesired block effects can result. On the left you see a detail of a zoomed object with a bitmap texture. Next to it the same detail is shown, this time with the bilinear filter switched on for the bitmap texture. The difference in quality is clearly visible.

Mip-Mapping
With Mip-Mapping switched on for a bitmap, the bitmap texture is smoothed with increasing distance. The Mip-Mapping technique generates additional bitmaps, resized and filtered to lower resolutions. The original highly detailed bitmap is used for surfaces in the foreground. For more distant points, one of the pre-filtered lower-resolution bitmaps is applied to prevent noisy texture flickering in the distant parts of the picture (see also: Texture Blur for procedural textures). It is in fact a little like the opposite of the "Bilinear Filter" function, which reduces pixel steps when zooming into a bitmap texture. Combining both functions gives the best results and always smooth bitmap textures. Example: The plane shown in the illustration above is textured with a tiled bitmap. The picture on the left was rendered with neither antialiasing nor Mip-Mapping switched on. As a result the distant parts of the picture appear very noisy. For the second picture, antialiasing was switched on at the maximum quality level of 4 (25 subpixels per pixel). The quality of the picture increases dramatically, but so does the rendering time. The illustration on the right shows the same scene with a low level of antialiasing (level 1 = 4 subpixels per pixel) and the Mip-Mapping filter switched on. Although rendering is much faster due to the lower oversampling rate, the quality in the background of the picture has become even better.

Tile
It is often very laborious to draw a bitmap that repeats itself over and over - such as a brick wall or a grass texture. A large picture covering the entire object also wastes a great deal of storage space. Instead, you can use only a small detail that can be repeated beside and above itself - this way you get an overall tile-like structure for the required picture.
This is obtained simply by switching on the button. You can restrict the number of repetitions by specifying corresponding numbers for the x and y direction. Switch off the x or y button, respectively, if you want to repeat the structure infinitely.

Opaque
This option only affects transparent objects. Normally, with a transparent object, the object's color serves as a light filter and is always seen through - whether a bitmap is applied or not. If, however, you switch on the "Opaque" effect, the colors of a bitmap picture are regarded as opaque, while the rest of the object remains transparent.

Animated Bitmaps
With CyberMotion it is also possible to project complete picture sequences onto objects while rendering an animation. For example, you can easily set up moving or cyclic areas in a scene on which a film (bitmap animation) runs at the same time. Or you may have a picture sequence for a continuous change of object texture that you want to introduce into the animation. Even bumpmaps, reflection and transparency maps can be animated, as long as the following requirements are met: The pictures must be available as a picture sequence in one of the designated file paths. The pictures must be consecutively numbered, for example: "PIC0.TGA", "PIC1.TGA", "PIC2.TGA". Switch on the button - if the button is switched off (the default) only a single picture is projected. Simply add the first picture to be used to the bitmap list, and the program automatically recognizes whether a picture sequence is available and ties it in. If the animation is longer than the picture sequence, the sequence restarts with the first picture. Alpha maps: Alpha maps assigned to a picture can be animated too. In this case the numbering of the alpha map sequence has to be identical to the numbering of the picture sequence. If there is only a single alpha map, then it is used for all bitmaps of the sequence.

Glow
The same effect as the object glow on the material page, only this time applicable to each individual bitmap. Use the glow effect, for example, to project a self-luminous bitmap onto a monitor screen. Or project a picture with brightly shining blocks onto house fronts or spaceships to simulate illuminated windows. Use the mask color function to make the background color of these "light" bitmaps transparent. If an alpha map is assigned to the bitmap, then the alpha map controls not only the fade-over effect but also the intensity of the self-illumination of the bitmap.

Mask Color
Suppose you have constructed a yellow-colored sphere onto which you want to project a line of black writing. If you set up the bitmap for projection, you are forced to use the same background color for the writing on the bitmap as is used for the sphere. It would be better if the background of the writing bitmap were transparent and only the writing itself was projected. This can be achieved by switching on the option "Mask Color" and choosing a color that comes as near to the background color as possible. Then use the tolerance parameter next to the color box to allow a proportional deviation. In this way deviations caused by antialiasing effects in the picture can also be recognized.

Alpha Map
The alpha mapping function is switched on with this button. If an additional alpha map is assigned to a selected bitmap, the intensity values of the alpha map are interpreted as fading values to mix the bitmap colors with the colors of the texture layer lying underneath it. This way a bitmap can be mixed either with a procedural surface texture or with another, overlapping bitmap. White points in the alpha map are interpreted as wholly transparent, gray tones control a fade-over, and black points are interpreted as completely opaque. You have already seen an example at the beginning of this chapter. You can choose an appropriate alpha map in the file selector box that appears when you select the button next to the alpha map button.
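The fading rule just described can be written down in a few lines. In the sketch below (assumed names, not the program's internal code), an alpha map brightness of 1.0 (white) lets the underlying layer show through completely, 0.0 (black) shows only the bitmap, and gray values blend the two.

    def blend_with_alpha_map(bitmap_rgb, underlying_rgb, alpha_brightness):
        """alpha_brightness: 0.0 = black (opaque bitmap) .. 1.0 = white (fully transparent)."""
        a = max(0.0, min(1.0, alpha_brightness))
        return tuple((1.0 - a) * b + a * u for b, u in zip(bitmap_rgb, underlying_rgb))

    # A mid-gray alpha value mixes the bitmap and the layer beneath it half and half.
    print(blend_with_alpha_map((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))   # (0.5, 0.0, 0.5)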
.topic 420
Select the tab in the material dialog to edit the parameters for water textures. Switch the button on if you want to generate a wave-like effect on the surface of the object. This function is especially intended for planes in combination with landscape objects. The waves are calculated only in the x-, z-plane of the texture axes of an object, so if you apply this function to objects other than the plane object, be sure to check the alignment of the procedural texture axes via the "Rotate Texture" menu. Scale - Depending on the scale value entered, the individual waves are rendered very close together or very wide apart. This can best be observed in the preview window in "Camera, complete scene" preview mode. The next parameter defines how strongly the surface normal is bent by the wave effect - whether the waves are flat or high. It is also possible to animate the waves, creating the impression of a moving field of waves, by activating the corresponding option. The velocity value controls the speed of the wave movement; with increasing velocity the turbulence of the wave flow is also increased. If the waves appear a little too noisy in a still picture, reduce the velocity of the waves, even if you do not render an animation. The last parameter sets the direction of movement of the wave field.

.topic 76
Select the tab in the material dialog to enter several additional parameters required for VRML 2.0 export. Here you can assign URL links (internet addresses like http://www.3d-designer.com) to each individual object. Then, in a VRML-capable browser, you can change to other web pages simply by clicking on the corresponding 3D object. It is also possible to enter a link to another VRML project file (extension '.wrl'); clicking on the corresponding object then results in a jump directly into the next 3D world. Text entered in the description field is displayed in a VRML browser when pointing with the mouse at the corresponding object. You can also enter a target frame in the 'Parameter' field, e.g.: target=my_frame. See also: VRML 2.0 Export

.topic 160
The right illumination is the most important thing in computer graphics. Accordingly, CyberMotion offers various possibilities to light up your scene, from standard light types like simple point lights, spotlights and sunlight up to volumetric fire and area light objects of any form you like. In addition you can choose between different rendering algorithms, either simple direct illumination or complex global illumination models like photon mapping, a physical simulation of the light distribution in a 3D model.
The Light Dialog - Overview, preview options, light color and management of light objects
The individual light types:
Ambient Light - General Area Brightness - The general area brightness: constant term or photon mapping
Parallel Light - Sun Light - Symmetrical incidence of sunlight
Lamp - Point Light Source - The point light source, a standard in all 3D programs
Spotlight - Cones of light
Volumetric Fire - Walking through fire, why not?
Area Light - How to convert an ordinary object into a real light source and...
Photon Mapping Emission Parameters - With photon tracing each light source can emit millions of light particles to simulate the light distribution in a scene
Light-Mapping - How to use bitmaps to project colored light patterns
Lens Flares - Visible light and light reflections in the camera lens

.topic 38
- Menu "Options - Light" - Shortcut: + "L".
Light sources are managed entirely as normal objects in CyberMotion. Like all other objects, they can be worked on (e.g. moved and rotated) or copied and deleted. The only exception is the ambient light source, which is responsible for the general brightness of the area. As there can be only one area brightness, only one ambient light object is possible, and it can neither be copied nor deleted. You can insert as many light objects as you wish, provided that the maximum number of objects maintained by CyberMotion is not exceeded. Here, in the light dialog, new light objects can be created at any time at the press of a button.
Two light objects are, however, generated at program start, so that you already possess basic lighting for the preparation and editing of new objects and scenes: the ambient light source (for the general area illumination) and a parallel light source. Once you have arranged a scene you can modify and develop this basic illumination with new light objects.

The Preview Window
The window in the middle of the dialog provides a quick preview of the scene when changing light settings. Depending on the selected preview mode it can also show an isolated view of the selected light object and its light effects. Right beneath the preview window you can choose one of the following preview modes:
Lensflare, centered - if lens flare effects are assigned to the light object, a centered view of the selected light object is rendered. Note that the lens flare size in the preview is always adjusted to the dimensions of the preview window; the lens flares in the real scene can therefore vary considerably, depending on the distance of the light source from the camera and the incidence angle of the light.
Camera, light objects - only the visible light objects are rendered in camera view.
Camera, background and planes - a scene preview with all lights, background and plane objects in the scene. This is an ideal setting for quick previews when changing light parameters for complex scenes like landscapes, preventing time-consuming redraws of the complete scene when you only want to change the sun position, for instance.
Camera, complete scene - the whole scene is rendered in the preview window each time you change a light parameter.
See also: general preview options like quality, resolution or automatic update.

List of Light Objects
At the top left of the dialog is a list box in which all the existing light objects are displayed. A particular light object can be chosen, and the current settings for this light object are displayed and can be modified.

Creating a New Light Object
There are four buttons in the dialog field "Add Light Object" to generate new light objects. The new light is automatically provided with a name and displayed in the list box. You have the choice of generating a parallel light, a lamp, a spotlamp or a volumetric fire object. In addition, you can convert any ordinary object into a real area light source.
Switching Light Objects On or Off
You can switch light objects on or off via the corresponding button beneath the list box. As with all other objects, you can also switch light objects on or off in the select objects dialog; there you can also delete or copy them. Here in the light dialog, you can create new light objects or edit existing ones.

Light Color
Select the button to call up the color selection dialog. The color selected here is the color of the light emitted by the selected light source. For example, a light-gray to white color corresponds to normal daylight. For a warm light in a room choose a rather yellow tone.

Halo Color
Next to the button for the light color there is a second button. Here you can edit a second light color that is not used to illuminate the scene, but for the generation of light halos for visible light sources and lens flare effects.

Light Parameters
Beneath the light colors all parameters related to the selected light type are presented. They are described in detail in the corresponding chapters on the different light types.

Photon Emission Parameters
If you render in global illumination mode, using photon mapping to simulate the distribution of light particles in a scene, you can specify the parameters for the emission of photons in the dialog area beneath the preview window.

Depiction of the Lamps in the Viewport
On leaving the dialog, all movable light sources, such as lamps, spotlights and volumetric fire objects, are drawn in the viewport windows. A normal lamp appears as pictured above on the left. The dotted circle about the lamp shows the halo radius entered for a visible light source; the solid inner circle indicates the light radius. So that you can identify and work on the spotlight cone within a scene, a spotlight source is drawn as a vector object with a ray cone and a direction line. This allows a spotlamp to be easily arranged so that a target object lies within the light cone. A volumetric fire object is drawn as a cylinder with a point light source inserted in it (picture above on the right). Parallel light sources do not require positioning and consequently are not shown.

.topic 500
As mentioned already, there is always an ambient light object for the area brightness in the list box. It is meant to simulate the general background lighting of an area caused by object reflections and light scatter (e.g. clouds). Only the light's intensity (color) is relevant for this constant area light; no other parameters apply to the ambient illumination and therefore cannot be selected. As there usually is a certain amount of background brightness, you should always switch the ambient illumination on - unless you want to create special effects (such as extra-sharp shadows, for example).

Area Brightness and Photon Mapping
One of the major drawbacks of a plain raytracing implementation is that it does not properly take the indirect illumination into account - the light that reaches a point after being reflected from other objects in the scene, as opposed to the direct light from a light source. Especially in architectural scenes the illumination in a room is dominated mainly by indirect light reflected many times from the diffuse surfaces in a building. This aspect of indirect illumination is ignored when applying only a constant ambient term for the area brightness.
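In very rough terms, shading with direct light plus a constant ambient term can be pictured like the following sketch (a minimal, hypothetical example - the names and the simple Lambert term are assumptions and not taken from CyberMotion's actual shading code):

    def shade(normal, light_dirs, light_intensities, ambient):
        """Brightness of a point from direct light plus a constant ambient term.

        'ambient' is added unchanged everywhere in the scene. It stands in for
        all indirect light, which is why effects such as a wall lit purely by
        light bounced off the floor cannot be reproduced this way.
        """
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        brightness = ambient
        for direction, intensity in zip(light_dirs, light_intensities):
            brightness += intensity * max(0.0, dot(normal, direction))  # direct (Lambert) term
        return brightness

    # A point facing straight up, lit by one light from directly above:
    print(shade((0, 1, 0), [(0, 1, 0)], [0.8], ambient=0.1))  # approx. 0.9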
Now, with the newly implemented photon mapping algorithm, CyberMotion provides a global illumination model that combines the strengths of raytracing - reflection and refraction - with the ability to also render the indirect illumination caused by diffuse interactions in the scene. Rendering a picture with photon mapping is a two-pass procedure. In a preliminary run, little packages of energy (photons) are emitted from the light sources into the scene. Similar to ordinary raytracing, the paths of these photons are traced through the scene, and the distribution of the light particles is saved in a 3D data structure called the photon map. After the photon map has been calculated, the picture is rendered in an ordinary raytracing run, and the photon map is used to determine the area brightness when calculating the illumination for a surface point.
For a detailed introduction to photon mapping see:
Photon Mapping - Introduction and examples
Light Dialog - Photon Emission Parameters
Material Dialog - Photon mapping object properties

Combining Photon Mapping and Ambient Light
If you include photon mapping in your picture rendering, the ambient light object is not switched off automatically. You can therefore freely choose whether you still want some constant light added to the scene to lighten the picture up a little or not. However, the light color of the ambient light should be darkened considerably if it is left switched on in combination with photon mapping.

.topic 430
A new sun light object is created by operating the "Add Light Object" button in the light dialog. This light source is similar to sunlight, i.e. parallel incident light of uniform intensity. As the light comes from outside the 3D area, only the angle at which the light falls is entered. Positioning this light source later is not necessary, so a parallel light object is not represented in the viewport windows.

The parameters for a parallel light source
Incidence angle - You can directly input the inclination and direction angles to define the incidence angle of the light, or just click with the mouse into the instrument boxes and drag the pointer to the desired position.
Number of Shadow-Sensors - Determines the number of shadow-sensors used to generate a soft-shadow effect. Standard light sources like the lamp, sun and spotlight are defined with a specific radius. With it, instead of originating from a single point light source, the light comes from a spherical area light source. The given number of shadow-sensors is then used to scan the light sphere and estimate how much of the light area is hidden by other objects; from these results a soft shadow can be interpolated. If the radius of a light source is very large and the number of shadow-sensors low, for example, it produces the effect of several light sources standing close to each other. The greater the number of shadow-sensors and the smaller the radius, the better the soft shadow effect will be. However, the number of additionally calculated shadow-sensors should be kept as low as possible because of the rendering time required. To render soft shadows with more than one shadow-sensor you also have to switch on the option in the render options dialog.
No Shadows - The time required for the shadow calculation increases with each light source, especially if multiple shadow-sensors are involved. On the other hand, you often require several light sources to correctly illuminate all the objects in your scene.
But not all lamps cast shadows relevant to the picture rendering, e.g. lights illuminating areas in the distant background of a scene, or lights used to create special highlights on an object or light reflections in the camera lens. In such cases it makes sense to perform shadow calculations only for those light sources used for the general area lighting. For all other light sources you can switch off the shadow calculation with the option.
Sun - Switch on this option if you want a parallel light source to be interpreted as a real sun that will be rendered in an atmospheric sky background. If you select the option, the effect is automatically switched on, too. The visible sun object is, however, handled differently from other visible light objects. In combination with a clouded sky background the sun illuminates the clouds. The sun is always rendered as a visible disc in the background behind the clouds, and its intensity is filtered by the clouds and the atmospheric haze. Furthermore, the color range used to produce the sky background can be distorted in the vicinity of the sun, creating more realistic lighting in the sky around the sun. The normal lens flare effects are rendered in addition to the visible sun disc; that is why a lower intensity for the lens flare effects should be entered when combining a sun light with lens flare effects. A good example of use is a sun setting behind a mountain, with the direct light causing a lens flare halo in the camera lens and shedding a warm veil of light about the mountain peaks.
Other relevant settings for the parallel light object in the light dialog:
Light-Mapping - How to use bitmaps to project colored light patterns
Lens Flares - Visible light and light reflections in the camera lens
Photon Mapping - The photon emission parameters

.topic 440
A new lamp is created by operating the "Add Light Object" button in the light dialog.

The parameters for a point light source
This light source is similar to a light bulb - the light radiates in all directions from a single point.
Light Intensity - Maximum Range - In the real world the intensity of a point light source decreases in proportion to the square of the distance, i.e. doubling the distance between the object and the light source reduces the light intensity to a quarter. In computer graphics, however, this does not lead to satisfactory results, since most light sources in real life are not ideal point light sources. Therefore CyberMotion uses a special filter to reduce the light intensity with distance. To enter an appropriate intensity for the lamp you simply specify a maximum range - the distance at which the light intensity has almost fallen to zero. For example, to estimate the required light intensity for a room you would determine the room dimensions via the Box-Dimension function in the Scale Object menu and simply use this value as a basis for the light intensity. Then use the "Camera, complete scene" preview mode in the light dialog to adjust the light intensity until the room is lit satisfactorily.
Light Intensity and Photon Mapping - If you plan to render your scene with the photon mapping algorithm you ought to follow this course: Adjust the light intensities with normal raytracing for the preview renderings. Then render a first test picture in photon mapping mode.
To balance the intensity variations in the picture caused by the two totally different illumination methods, adjust the light intensity for the photon emission via the Intensity Correction value in the light dialog's "Global Illumination - Photon Mapping" parameter box instead of changing the general "Light Intensity - Maximum Range" parameter. This way you can switch back to raytracing mode for faster previews while extending and editing your scene and light settings, and in the end you can return to photon tracing without having to readjust the light intensities for the photon mapping process.
See also: Light Dialog - Photon Emission Parameters
Light Radius - If you enter a radius for a point light source it is possible to render this light object as a visible spherical light source (see also: Visible Light and Lens Flares). The light radius is also required to determine the maximum deflection of the shadow-sensors when scanning a light sphere for the generation of a soft-shadow effect.
Number of Shadow-Sensors - Determines the number of shadow-sensors used to generate a soft-shadow effect. See also: Parallel Light - Number of Shadow-Sensors
No Shadows - The light source does not cast shadows. See also: Parallel Light - No Shadows.
Other relevant settings for lamps in the light dialog:
Lens Flares - Visible light and light reflections in the camera lens
Photon Mapping - The photon emission parameters

.topic 450
A new spotlight is created by operating the "Add Light Object" button in the light dialog.

The parameters for a spotlight
A spotlight has the same type of light dispersal as a normal light bulb, except that in this case an angle of spread can be stated within which the light is visible. This creates a cone of light originating at the spotlamp. On the left of the dialog box three angle instruments are now shown. The first two again give the inclination and direction of the spot cone, as with parallel lights. The third instrument contains two arrows emerging from the spot, which show the divergence angle of the light - the cone angle. Values between 1 and 180 degrees can be entered for the emerging cone.
Light Intensity - Maximum Range - A reduction in light intensity with distance is calculated. See also: Lamp - Light Intensity - Maximum Range
Spot-Interpolation - You can influence the harshness at the edges of a spotlamp beam with the Spot-Interpolation parameter, avoiding an unrealistically harsh transition at the beam edges. The higher the value, the greater the area over which the intensity is diffused. Examples in the illustration, from left to right:
0 - Normal light cone without diffusion of the intensity.
0.3 - Over the last 30% the edge area shows a reduction in intensity, for slow blending and soft spot edges.
1 - The light intensity falls off from the center of the beam to the cone edge.
Light Radius - If you enter a radius for a spotlight source it is possible to render this light object as a visible spherical light source (see also: Visible Light and Lens Flares). The light radius is also required to determine the maximum deflection of the shadow-sensors when scanning a light sphere for the generation of a soft-shadow effect.
Number of Shadow-Sensors - Determines the number of shadow-sensors used to generate a soft-shadow effect. See also: Parallel Light - Number of Shadow-Sensors
No Shadows - The light source does not cast shadows. See also: Parallel Light - No Shadows.
Volumetric Spotlight Cone - Switch on this option to render a visible light cone with realistic shadow casting from objects penetrating the cone. This light cone can only be rendered in raytracing mode, and the global button has to be switched on in the render options dialog, too. There you will find some more parameters that define the global diffuse reflection and the render resolution for these time-consuming volumetric calculations. See also: Render Options - Volumetric Spotlight.

Spotlight Alignment
Besides the two alignment angles here in the light dialog, you can also change the spot output direction in the Rotate Object work menu. There, the spot object is treated as an entirely normal object, which can be turned at will so that the direction of the spot cone changes with the rotation. Whether you rotate the spot cone output direction here in the light dialog or in the object work menu is entirely up to you.
Other relevant settings for the spotlight in the light dialog:
Light-Mapping - How to use bitmaps to project colored light patterns
Lens Flares - Visible light and light reflections in the camera lens
Photon Mapping - The photon emission parameters

.topic 460
Area light sources, as they are often used in modern architecture, e.g. in light panels, are hard to simulate with standard point light sources. In CyberMotion, however, you can convert any object you like into an area light source, just by activating the object property in the material dialog. Each point of the object is then interpreted as a small subordinated point light source contributing a small share to the object's total light intensity. As a result, the rendering time for the picture calculation - especially when rendering shadows - increases with the object's point resolution, since each point is included with a separate light and shadow sensor in the illumination process.
NURBS patches are ideal area light objects because of their regular structure with evenly spaced points forming the surface. Apart from that, you can change the point resolution of NURBS patches at any time, e.g. to render faster preview pictures with a low point resolution and then change to a higher NURBS resolution for the final rendering. Not least, NURBS patches have no thickness, so they can easily be installed in wall panellings.
Only a maximum of 200 light and shadow sensors will be calculated for each area light object. If the object construction contains more points, then samples are picked randomly from the object.
Adjusting the Light Properties for Area Lights - Once you have activated the attribute, the object will also be listed in the light dialog together with all standard light types. If you then change to the light dialog, you will be able to edit the light parameters for the area light object: the light color, the intensity and also the photon emission parameters. Of course, area lights are included in the photon mapping process and can emit photons just like all standard lights. The results will be even better than with direct lighting, because photons are emitted from the whole surface of an area light source, in contrast to the interpretation as a cluster of subordinated point light sources in the direct lighting algorithm.
The light color of an area light is independent of the material color. Think of the object as an ordinary body or as a container for a light source.
If the light is switched off or is shining very dimly, you still have to take into account the light reflections from the container if it is illuminated by other lights in the scene. Therefore the object material is calculated and interpreted as an ordinary object surface with all of its possibilities, e.g. bitmap textures, reflection or transparencies. Only then is the self-luminosity added to the material's color using the light color. In an animation, for instance, you can animate the light color from dark to bright and the object will slowly begin to glow and illuminate the scene. Part of this interpretation is that area light objects cast shadows when they are illuminated by other, brighter light sources. You can switch off the shadow casting by activating the material attribute for an area light.

The parameters for area light objects in the light dialog
Light Intensity - Maximum Range - A reduction in light intensity with distance is calculated. See also: Lamp - Light Intensity - Maximum Range
Number of Shadow-Sensors - For standard light objects you can determine here the number of shadow-sensors used to generate a soft-shadow effect. As already mentioned above, for an area light object an individual shadow-sensor is calculated automatically for each point of the object interpreted as a subordinated light source. Only a maximum of 200 light and shadow sensors will be calculated for each area light object; if the object construction contains more points, samples are picked randomly from the object. Take care to keep the object construction as simple as possible - roughly 8 to 200 points - since every additional shadow-sensor slows down the calculation considerably.
No Shadows - The area light does not cast shadows.
Two examples of area lights: In the picture on the left, 4 NURBS patches converted to area lights illuminate the room. In the right picture the glass sphere was changed into an area light source. The picture on the left is rendered without photon mapping, since the light is scattered sufficiently by the many points of the 4 big light patches. The right picture was rendered with photon mapping for the indirect illumination; just 25,000 photons are enough to provide a soft, warm ambient light for the scene. The soft shadows are again achieved by the shadow-sensors directed at the points of the light sphere. Both demo files can be found in the projects folder under "...projects/arealights/AreaLights-NURBS.cmo" and "projects/arealights/AreaLights-AnalyticalSphere.cmo".
Other relevant settings for area light objects in the light dialog:
Photon Mapping - The photon emission parameters

.topic 470
With the volumetric fire object almost all kinds of fire can be simulated, from smoothly burning candle flames up to vividly burning torches, camp fires or blazing seas of flames. Volumetric fire is confined to a cylindrical bounding box with an additional lamp object fixed to it. Volumetric fire objects are created within the light dialog. Since a lamp object is automatically subordinated to the fire cylinder, all parameters for lamp lights can be edited when a lamp belonging to a fire object is selected in the light dialog. In addition to the lamp details, all parameters forming the volumetric fire are then displayed.
Fundamentally, volumetric fire is calculated similarly to volumetric fog, applying a ray marching algorithm that takes samples of the fire density along the path through the fire cylinder, so most of the parameters describing the fire are similar to those describing the volumetric fog. Additional parameters define the color palette of the fire, the shaping within the cylinder, the turbulent flow and the flickering (shifting of lamp position and intensity in an animation) of the flame.
A new volumetric fire object is created by operating the "Add Light Object" button in the light dialog. If you operate the button, a little dialog opens first. In this dialog you can define the dimensions of the cylinder that serves as a bounding volume for the fire object. An additional lamp object is created automatically with the cylinder, and the lamp is fixed permanently in a hierarchically subordinated position to the cylinder.
The Fire Cylinder - The cylinder enclosing the fire is an analytical primitive and can be positioned, scaled, rotated and animated just like any other analytical primitive. You can also copy or delete fire objects or insert them into hierarchies. The lamp object belonging to a fire object will automatically follow all modifications. Fire cylinder and lamp object always form a pair: when you switch off the lamp object in the light dialog, the fire cylinder is switched off, too, and the same applies the other way round if you switch off the cylinder object in the select objects dialog.
Depiction in the Preview Window - As with lens flare effects, a change to the various parameter settings of a volumetric fire object is indicated by a centered redraw of the fire in the preview window if the preview mode "Lensflare, centered" is selected.

The parameters for the generation of flames
At the head of the parameter box you again find the familiar settings for lamp objects. These parameters control the illumination attributes of the fire via the subordinated point light source: Light Intensity, Light Radius and Number of Shadow-Sensors. See: Lamp - Point Light Source
Below them you find all parameters influencing the shape, color and quality of the fire:
Fire Palette - To edit the colors for a fire object simply click on the color range bar. It opens the color range editor, where you can define your own colors or just load a pre-defined color range from the visual library. Special color palettes for fire are located in the fire folder of the color library. Note! The light emanating from a fire object is defined with the light color, as with other light objects. The fire color palette only controls the color range used to draw the flames.
Fire is created using a fractal algorithm, so you will most probably recognize many of the following parameters from other functions in CyberMotion, like clouds, volumetric fog or landscape design.
Scale - The underlying fractal patterns used in a fire calculation are rendered close together or wide apart depending on the scale value entered. The pictures above are borrowed from the chapter about volumetric fog, but volumetric fire and volumetric fog are founded on the same mathematical algorithms, and these pictures show very well how the scaling function works. The picture on the left was generated with a small scale value, which results in a smooth random pattern stretched wide apart. This is the right setting for soft and steady candle flames.
With a higher value for the scale function, the random pattern is rendered closer together, with much more detail and frequent gaps in the pattern. You could say: the higher the scale value, the wilder the fire gets.
Clustering - Adds more detail to the fire by increasing the gaps in the random fractal pattern, similar to the "Thin Out" parameter of the volumetric fog function. As a result the fire becomes even wilder, with the flames frequently collapsing and flaring up again.
Iteration - The number of iterations defines the level of detail of the fire pattern. A single iteration results in a very blurred pattern; further iterations add new details to the fractal noise. Again two pictures from the volumetric fog section: the picture on the left shows a fractal pattern rendered with 2 iterations, while 4 iterations were used for the picture on the right. Again we can say that a small number of iterations is suitable for softly shaped, smoothly burning candle flames, whereas more iterations are ideal for vividly burning flames with detailed, frayed outlines.
Random - Initialises the random generator for the fractal algorithm. If several fire objects are in the scene, e.g. a number of candles on a candelabrum, then each candle should be initialised with a different random number so that not all flames burn and dance to the same rhythm.
Quality - In CyberMotion fire is calculated with a volumetric approach. This is done by tracing a viewing ray through the pillar of fire and taking many samples of the fire density along its path through the fire. With the help of the quality parameter you define the intervals at which new samples are taken in the fire. With a higher quality value the step width becomes shorter and the number of fire densities calculated in the fire cylinder increases considerably. If you render small candle flames or torch flames you can enter high quality values without hesitation, but if you want to produce a sea of flames, where the viewing ray has to travel long distances through the fire, it is better to enter small values for the preview renderings and to increase the quality to a value of about 0.90 or higher only for the final rendering.
Turbulence - This parameter increases the turbulent flow in the fire movement. This applies both to the fractal pattern within the fire and to the outline of the flame, which begins to flicker more wildly.
Velocity - Determines the speed of the fire movement. The flames always move upwards along the positive y-object axis. Since the fire cylinder can be freely rotated, you can also use the fire object for other effects, e.g. a jet propulsion or the flaming tail of a comet. You just have to position and align the fire cylinder so that the positive y-object axis points opposite to the movement direction. This example shows a flaming meteor entering the atmosphere.
Flicker - This parameter affects the illumination attributes of the point light source belonging to the fire object. Depending on the flicker value and the velocity of the fire movement, the lamp origin is shifted slightly back and forth in a turbulent current. Simultaneously the light color is changed in nuances, and so the restlessness of the flames is also transferred to the illumination of the scene. This picture shows a still from a candle flame animation.
In the running animation - you find the original project file under ".../projects/volumetricfire/candles_anim.cmo" - you can observe how the flickering of the candle flames is transferred to the illumination of the room.

Fire Shaping
The cylinder defines the bounding volume in which the fire is calculated, but you can apply one of 8 additional basic shapes to further form the fire within the cylinder. For instance, you can apply an onion shape for small candle flames or a cone- or egg-shaped form for a camp fire.
Another demo animation: the project file is also part of the CyberMotion installation, under "...projects/volumetricfire/fire_logo_anim.cmo". Six individual overlapping fire cylinders form a continuous line of fire. In the animation the cylinders are slowly elongated from small discs until the flames engulf the CyberMotion logo; then the camera moves forward and dives into the flames.
Other relevant settings for the point light source subordinated to the fire cylinder:
Lens Flares - Visible light and light reflections in the camera lens
Photon Mapping - The photon emission parameters

.topic 510
A simple but impressive effect combines light sources with bitmaps used as light filters. This allows you to simulate the most complex shadows using fast bitmap operations. Imagine the horizontal strips of a window screen, fences, complex window frames, rotating disco spotlights (using a spot-type light source rotating in front of a multicolored bitmap), the colored shadows of tiffany lamps or windows, etc.
Interpretation of the bitmap colors:
White - completely transparent.
Black - no transparency.
Colored - filters the light color.
Light mapping is mainly a feature for spotlights, but it can be applied in the same way to parallel lights. Light mapping is not applied to normal lamps, as the uniform radiation of a point light gives no clue about the direction of the light map projection.
- Activate this button to switch on light mapping.
- Interpolates the bitmap colors, smoothing undesired step effects and pixelized transitions that become especially visible when the picture is magnified by the shadow casting.
Operate the button to display the file selector box and then select a bitmap suitable as a light map.
- Light maps can be repeated next to and above each other. This function is optional for spotlamps; for parallel lights, however, it is applied automatically, since there is no origin for this type of light and therefore there cannot be a single origin for a picture either.
There are some additional features that apply to spotlights only:
- You can change the size of a projection by using the distance parameter. Imagine a slide you put in front of a flashlight: the closer you move the slide to the flashlight, the larger the projection of the picture on the wall becomes, and vice versa.
Use the button to automatically adjust the distance of the bitmap from the light source so that the bitmap fits exactly within the light cone.
If the option is switched on and you select the button, the distance of the bitmap from the light source is automatically adjusted so that the picture lies completely within the light cone. Furthermore the function changes the light cone to a rectangular projection type, just like a real projector. Light rays that do not pass through the bitmap are simply left out in the rendering process.
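As a rough sketch of how a single light ray's color might be attenuated by the light-map pixel it passes through (a hypothetical illustration following the white/black/colored interpretation described above; the function name and the 0-1 color convention are assumptions, not the actual implementation):

    def filter_light_through_map(light_color, map_color):
        """Attenuate a light ray by the light-map pixel it passes through.

        map_color components in 0..1:
          (1, 1, 1) white -> light passes unchanged (completely transparent),
          (0, 0, 0) black -> light is blocked entirely,
          anything else   -> the light color is tinted by the map color.
        """
        return tuple(l * m for l, m in zip(light_color, map_color))

    # A white spotlight ray passing through a red light-map pixel comes out red:
    print(filter_light_through_map((1.0, 1.0, 1.0), (1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)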
Be aware, though, that when the rectangular projection type described above is used, the spot interpolation function is still calculated, so that the light intensity falls off continuously from the center of the picture to the rectangle's edges if spot interpolation is not switched off.

.topic 490
A picture from a real camera can contain faults due to the camera optics. If, for example, the light from a light source shines directly into a camera, several lens flares appear in the camera's lens. This accounts for the well-known star, circle or annular lens flares in pictures. In the photo and film industry this effect is not always viewed with pleasure, and considerable expense often goes into avoiding it. However, people have become so accustomed to these picture faults that the effects can help to considerably increase the realism of computer-generated pictures.
Here, in the light dialog, you can decide whether such lens flares are to be generated for individual lamps or spotlamps. On the right side of the light dialog you have a large number of different parameters with which you can determine the type, intensity and size of the flares for each light object. Furthermore, you can animate the light effects in various ways: by rapidly expanding light halos and rings and rotating the star rays, you can simulate many effects - from a rising sun to light explosions.
Apart from the buttons to switch the lens flares on or off for every individual light object, there is also a global button in the render options dialog which allows you to switch the lens flares of all light sources on or off globally.

Visible Light
If you switch on the option, a spherical light halo is created around each light source. This halo consists of an inner circle defined by the Light Radius and an outer halo defined by the Halo Radius, where the color path of the light color fades towards the halo color. Simultaneously, the intensity of the halo reduces more and more towards the edge. If you also switch on the rainbow-colored button, a rainbow filter is applied to the halo color range. The light radius is part of the basic light settings located on the left side of the dialog. If the halo radius entered here in the Lens Flares box is smaller than the light radius, only the inner light circle is drawn, without producing a halo effect.
Intensity - Lens flares are additional intensities added to the picture in post-processing after the picture calculation has finished (exception: sun light objects are rendered as visible light spheres in the background as well as being overlaid with the lens flare effects afterwards). The intensity parameter controls the intensity of the overlaid lens flares.
Examples: On the left you see a visible light source with a white light color and a blue color for the halo. On the right you see a white light color and also white for the halo, but this time with the rainbow filter switched on.
Visible, If Partly Covered - Normally, lens flares appear only if the light source is directly visible and not masked by other objects. For many purposes, however, it is very useful if the light source with its halo shines from behind objects, so that you can still see parts of the halo. In this event, the halo is not interpreted as a lens flare in the camera, but as a halo around the light source, as seen in fog or rain, for example. Such a halo originates from the light of the source being reflected by fine surrounding water droplets.
In this picture there are three light sources with identical parameters. The light source at the top in the middle is fully visible, and so all light effects appear. The light source on the lower left is partially visible; the option is switched on, and you can still see the halo surrounding the light source. The light effects that would originate through reflections within the camera lens (i.e. light stars, rings or spots) are, however, not calculated for masked lights. With the light source on the lower right we have one more special case - it is behind a transparent object. In this event the lens flares are supplied but, additionally, the color and intensity of the light source are filtered through the transparent object.
Spotlights and Visibility of Lens Flares - With spotlamps, lens flares and light halos appear only if the camera is within the light cone thrown by the spot. If, on the other hand, a visible halo is also desired when the spotlight cone is aimed past the camera, the option must be switched on. Light stars and other effects, however, are only calculated if the camera is in the spotlight cone.
Visible Spotlight Cones - In this chapter only lens reflections and visible light sources are discussed. CyberMotion can, however, also produce realistically rendered spotlight cones. See also: Render Options - Volumetric Spotlight and Spotlight - Volumetric Spotlight Cone
"Real" Light Objects - There are several ways to produce visible light objects in CyberMotion: With lens flares and light halos you can very easily add visible light effects, but in the end lens flares are only a simple post-processing effect. Objects of any shape can be converted into light objects with real illumination attributes, but rendering these area lights is very costly due to the many light and shadow sensors that have to be evaluated. There is, incidentally, one more little trick with which you can generate a visible light source that combines wonderfully with the lens flare effects: Generate an object (for instance, a sphere) and choose a material color that matches the light color. Then set the material's attribute to the maximum of 1.00. Also switch on the option as an object attribute in the material dialog. Then place a light source precisely in the center of this object. Finally, in the select objects dialog, arrange the light source in a hierarchically subordinated position to the new light object (simply drag the name of the light source onto the name of the light object), so that the light source follows every movement of the light object. Here is an example of a light bulb with a spherically shaped light object containing a point light source and additional light effects.

Lens Flare - Ring
Annular lens flares are sometimes seen around bright light sources. You can simulate this light effect by switching on the option. Again, you can overlay the light ring with a rainbow filter by selecting the rainbow-colored button.
Radius: This parameter sets the radius of the halo ring.
Width: The ring width is determined by this value.

Lens Flare - Star
The option creates a star-shaped lens flare for a light source.
Number of Rays: The number of base ray arms. A star can comprise a minimum of 2 and a maximum of 40 rays.
Iteration: The number of iterations by which additional rays are generated on top of the base rays.
For example, if you input a value of 4 for the base rays and a value of 2 for the iteration, the end result is a star of 4 base rays plus 4 supplementary rays from the first iteration run plus 8 supplementary rays from the second iteration run - 16 rays in total. The rays added become somewhat thinner and shorter with each iteration. The minimum iteration number is 0, the maximum is 4.
Ray-Length: The basic size of a single ray arm.
Ray-Width: The width of a ray arm. A value of 0.01 corresponds to very narrow rays and the maximum value of 1 produces very wide rays - in which case the individual rays are barely distinguishable.
Rotate: The angle of the first ray of the star from the vertical. With a low or odd base ray number (3, 4 or 5) it often looks better and more realistic if you give the star a slight rotation, so that it is somewhat out of alignment with the edges of the picture. Another beautiful effect is also possible: a star with only 2 rays and no iteration, rotated through 90 degrees, lies exactly horizontal - you have probably seen this type of light effect in many films already. The angle value can be animated; by entering different angles in keyframes, you can produce a rotation of the star effect in animations.
Random: The random function generates an asymmetrical star halo. The random generator can be initialized with a value between 0.01 and 1 for different random results.
Examples:
Number of rays: 4, iteration: 1, width: 0.08.
Number of rays: 4, iteration: 2, width: 0.35, length: 85.
Number of rays: 4, iteration: 3, width: 0.70, length: 85.
Number of rays: 2, iteration: 0, width: 0.04, length: 85, rotation: 90. Light radius: 8, halo radius: 15. Light ring: radius: 18, width: 3.

Lens Flare - Spots
This type of lens flare generates multicolored rings and circles running out from the light source, diagonally through the focus of the picture. In films, this effect is often seen when, during filming in the countryside, the panning camera catches the sun and lets these spots run through the picture. You can also use this light effect very effectively in animations. With the three ring-type buttons you can again choose whether or not to overlay the rings with a rainbow filter; if you select the third button in the row, both types of rings will be mixed.
Number: This value gives the number of spots to be calculated.
Size: The maximum size of a spot. The actual size depends on the random generator and, as with all other light effects, also on the distance of the light source from the camera.
Intensity: The maximum intensity of a light ring, from 0.01 to 1. Additionally, the intensity of the color of the light ring is determined by the colors of the light and halo and depends on a random value.
Random: The starting value for the random generator. Changing the initialization value can generate many different variations of the light effect.

Global Scale
Once the general shape has been edited via the various effect parameters, you can resize the resulting lens flare effect as a whole with the Global Scale parameter.

.topic 480
In the "Global Illumination - Photon Mapping" box in the light dialog the parameters for the photon emission are entered. For a detailed introduction to photon mapping see:
Photon Mapping - Introduction and examples
Render Options - Global Illumination - Raytracing + Photon Mapping
Material Dialog - Photon mapping object properties
Each light source in CyberMotion can emit photons for the photon mapping process.
However, you can also exclude individual light objects from this process. If you switch on the option in the light dialog then, instead of emitting photons, the corresponding light object will illuminate the scene only with conventional direct light algorithms, no matter which rendering mode is activated. You can use this function for little lamps in instrument controls, for lights far away in the background or, e.g., for spotlamps illuminating only small parts of the scene. To save rendering time, just switch this function on for all light objects that do not contribute much to the general illumination of the scene.
Emit Photons from Both Sides of a Facet - This button is only applicable to area light objects. Area lights emit photons from all of the facets building the surface of the object. However, if the light object is formed from a closed shape, e.g. a sphere or a cube, it is not necessary to emit photons from the inner side of the light object. Therefore, as a standard, photons are emitted only in the direction of the surface normals. (The surface normals of closed objects created in CyberMotion always point outwards.) However, if you want to use open objects or flat surfaces without thickness as light objects, then simply switch the option on. Another possibility is to apply flat NURBS patches for light panellings; to prevent the emission of superfluous photons backwards into the walls, again switch off the emission of photons from both sides of a facet. With the menu function "View - Normals" you can include the depiction of normals in the viewport windows. If the normals of a NURBS patch face towards the wall, simply rotate the NURBS panel by 180 degrees.
Number of Photons - For each light source you can enter an individual number of photons to emit for the photon tracing. Numbers between 20,000 and 10,000,000 are practical, depending on the complexity of the scene and the rendering mode. When applying a photon map only for the indirect illumination, you can manage with relatively small photon maps; to estimate the general area brightness in a small room you can get by with photon maps of about 50,000 photons. However, if you apply photon mapping as a global illumination model without direct lighting, usually a million or more photons are needed to cover all details in the scene. Furthermore, you have to enter a correspondingly high number of photons to gather for the photon pool to get smooth intensity transitions and to prevent a spotty appearance (about 600 up to 2,500 and more, depending on the size of the photon map). This requires a fast CPU and a memory capacity of at least 256 MB.
Intensity Correction - For each light source a light intensity can be entered via the "Light Intensity - Maximum Range" parameter. This light intensity fits best to the direct light algorithm used in conventional rendering modes, but since photon mapping applies a totally different illumination approach based on a more physical model, an intensity correction value is needed to match the light intensities when switching back and forth between simple raytracing and photon tracing. If you plan to render your scene with the photon mapping algorithm you ought to follow this course: Adjust the light intensities with normal raytracing for the preview renderings. Then render a first test picture in photon mapping mode.
To balance the intensity variations in the picture caused by the two different illumination methods, adjust the light intensity for the photon emission via the intensity correction parameters instead of changing the general "Light Intensity - Maximum Range" parameter. This way you can switch back to raytracing mode for faster previews while extending and editing your scene and light settings, and in the end you can return to photon tracing without having to readjust the light intensities for the photon mapping process.

Caustics
Caustics are light reflections from highly specular surfaces or, e.g., the light gathered in a focal point after transmission through a glass lens. An example: the caustic light reflections beneath a little glass figurine, caused by photons that were refracted when passing through the glass. The corresponding project file is part of the CyberMotion installation under "/projects/caustics/ant.cmo".
Caustics photons are stored in a separate photon map, the so-called caustics map. Usually a photon map is evaluated only for the indirect illumination, in combination with direct light for the main illumination and shadow calculations. For this purpose it suffices to emit only a few tens of thousands of photons into the scene, so that the general area brightness can be averaged at each point in the scene. Caustic reflections, on the other hand, are often sharply outlined light patterns, as in the figurine picture shown above. It would be impossible to render these light reflections if only a few photons had been scattered around - you wouldn't even see a glimpse of the light focused beneath the figurine. On the other hand, it would be ridiculous to emit millions of photons into the scene and then have to evaluate huge photon maps afterwards, only to cover the light reflections of a little specular object somewhere in the scene. That is why two different photon maps have to be managed: the global photon map for the general illumination, and a separate caustics map only for those photons that have been reflected or transmitted via a specular surface before hitting a diffuse surface. The caustics map is built in a second photon tracing pass in which additional caustic photons are aimed only at objects that are highly specular or transparent and have the material attribute set.
The evaluation of the two different photon maps also requires separate photon pools. For the global photon pool many more photons have to be gathered for the averaging process, so that soft and clean light transitions can be calculated for the area brightness. For the caustics pool, however, comparatively few photons are needed, because we want sharp and clearly visible light reflections.
The caustics parameters in the light dialog:
Caustics - Aim additional photons at objects that cast caustic reflections - The emission of additional caustic photons can be switched on or off for each light object separately. You should activate the caustic photon emission only for lights that stand nearby or are directed towards objects that have the object attribute set. If you want to render a picture with the camera focused on a caustics object as the main part of the image, it is advisable to use a spotlamp for the illumination, because then the stream of photons can be aimed directly at the target object.
n-Times more Photons - Determines the multiple of additionally emitted photons used to create the caustics effect.
Instead of specifying a certain number of photons, this time you just enter a factor that describes how many more photons per area have to be emitted towards the caustic objects than for the global photons. Take the glass figurine as an example again: the emission of global photons was set to 50,000 photons, and for the emission of caustic photons the factor was set to the maximum value of 100 via the parameter. During the processing of the photon map in the first pass, when the 50,000 global photons were emitted, about 1,100 photons found their way through the glass figure and were saved in the caustics map. Then the additional emission of caustic photons is started with 100 times more photons, that is 100 * 50,000 = 5 million photons. But this is only a notional value that specifies how many more photons per area are emitted in general. Since caustic photons are only directed towards caustic objects, in the end "only" 200,000 photons find their way into the caustics map. This is more than enough for a sharp representation of the caustic light effects under the glass figure.

.topic 170
How to put a picture or a simple color range in the background when rendering, or generate a complex atmosphere with fog, clouds and a colorful sky.
The Background Dialog - Structure of the dialog, preview options and the background library
The individual background models:
Simple Color Range - Draw a simple color range in the background
Atmosphere - How to simulate a complex atmosphere
Sky Colors - Two different approaches for rendering sky graduations
Atmospheric Filter - Why does the sky turn red at sunset?
Clouds - Adding cloud layers to the atmosphere
Fog - Atmospheric fog and volumetric ground fog
Rainbow - The phenomenon of caustic light reflections in the atmosphere
Rain/Snow/Floating Particles - At the touch of a button it begins to rain or snow
Stars - Use the starfield as part of the background model or alternatively as an animated starfield for space travel.
Background Bitmap - Copy a bitmap into the background

.topic 48
- Menu "Options - Background" - Shortcut: + "B".
In the background dialog you can stipulate a color gradient for the background on rendering, a complex atmospheric sky model with or without clouds and fog, or simply a bitmap. The background is managed as an object with the following attributes:
1. The background can be switched on or off here in the background dialog via the button, or - just like other objects - in the select objects dialog by de-selecting the background object there. If the background is switched off, it simply appears black.
2. The background can be animated. The background object can be worked on in the animation dialog in exactly the same manner as all other objects, e.g. copying, deleting or inserting new keyframes. Instead of object movements, however, the background colors and the cloud and fog parameters are animated.

The Preview Window
The preview window in the central part of the dialog provides a quick preview of the scene when changing background settings. There is a selector box underneath the preview window providing several preview modes:
Panorama, no objects - Only the background is rendered, as seen from a predefined panoramic camera view. The panoramic view is ideal for adjusting parameters of landscape scenes with atmospheric backgrounds.
Panorama, only planes - In addition to the background, all plane objects in the scene are displayed. Most projects contain a plane to clip the scene downwards and towards the horizon.
If not, and an atmospheric background without fog is applied, the background will simply be mirrored at the horizon.
Panorama, complete scene - The complete scene is rendered into the preview window.
Camera - no objects, only planes and complete scene - the same as the panoramic views, except that this time the current camera settings are used to render the scene.

Copy Panoramic Camera Settings to Current Scene Camera
You can copy the settings of the standard panoramic view to the current scene camera by operating the Camera> button.

Add Plane Object
If you have forgotten to create a plane, you don't need to leave the dialog and change to the Create Plane dialog. Simply operate the button to generate a new plane for your scene. This button is only available if no plane exists in the scene yet.

Visual Background Library
The visual background library is located on the right side of the dialog. Double-click on a thumbnail picture to load an existing background file and modify it to your needs, or add your own backgrounds to the library using the save function. All types of backgrounds can be saved, although it would be unreasonable to save a simple color range background, for instance, since color range files can be saved separately in the color range library anyway. See also: General library functions to save, load or delete entries or to create sub-folders.
In addition to the general library functions that are applicable to all libraries in CyberMotion, there are some special settings for saving and loading background files:
Load - replace sun settings - In addition to the background parameters, a background library file also saves the settings of activated sun light objects, since the appearance of a cloud formation depends strongly on the light incidence of the sun. Here you can decide whether or not to overwrite the current settings for the sun with the data from the library file. This setting affects only activated sun lights; all other light types, including parallel light sources with the option switched off, will not be saved or replaced by a background file.
Save - includes animation data - If this button is selected, the background object will be saved including all keyframes related to it (if animated). Otherwise only the parameters of the current keyframe position are saved, resulting in the background as it is presented in the preview window.

Select a Background Model
In the upper left of the dialog there is a selector box in which you can choose the type of background model you want to apply. Depending on the selected background, the relevant parameters appear on the left side of the dialog:
Simple Color Range
Atmosphere - Including background colors, filters, clouds, fog, stars, snow, rain...
Background Bitmap

.topic 530
If you select Color Range in the background select box, an angle instrument appears directly beneath the select box, similar to those in the light or camera menu. It indicates the direction of the color graduation. Simply click in the instrument and drag the pointer to the desired position. To edit the colors for the color range, simply click on the color range bar beneath the angle instrument. It opens the color range editor, where you can load color ranges from a visual library or define your own.
Example of a color range shining through two transparent objects.
The following should be noted concerning the simple color range: Only a simple 2-dimensional color path is calculated for the background of the picture.
No 3-dimensional background effects are applicable, so you cannot expect the background to change if, for example, you move the camera towards it. Nor can an object cast a shadow on the color path. However, you can see the color range through transparent objects but, as there is no real 3-dimensional background, no distortion due to refraction will be seen through transparent materials. A 3-dimensional background model (such as the atmosphere model) would be appropriate if the background is to be rendered with refraction seen through a transparent object - for instance, a glass sphere. .topic 540 The atmosphere of a planet is a complex and multi-layered matter. It provides the oxygen to breathe, the ozone layer that protects us from hard ultraviolet radiation, transports essential humidity in its clouds and conjures up the most wonderful colors and cloud formations in the sky. Now, with CyberMotion you can catch some of this enormous variety for your picture compositions. Choose the "Atmosphere" entry in the background select box and the seven sets of parameters for the creation of an atmosphere will step to the fore. All effects can be freely combined or switched off, just as you like. For instance, if you choose to add the starfield to a cloudy atmosphere, it will be automatically filtered by the clouds and the fog. But if only the starfield is activated, without clouds and fog, then you can use it as a background for scenes in outer space. Sky Colors Two different approaches for rendering sky graduations Atmospheric Filter Why does the sky turn red at sunset? Clouds Adding cloud layers to the atmosphere Fog Atmospheric fog and volumetric ground fog Rainbow The phenomenon of caustic light reflections in the atmosphere Rain, Snow and Floating Particles At the touch of a button it begins to rain or snow Starfield Use the starfield as part of the background model or alternatively as an animated starfield for space travel. .topic 550 Choose the tab in the atmosphere selection to edit the parameters for the sky colors. Edit Color Palette - To edit the colors for the sky simply click on the color range bar. It opens the color range editor where you can define your own colors or just load a pre-defined color range from the visual library. Color Range Mode The color range of a 3-dimensional sky is generated over a large sphere surrounding the 3D area. This technique provides a true 3-dimensional color range with the attribute that the sky/horizon moves with the camera movement - as does a real sky/horizon. Also, like any other object, the sky is mirrored in reflective objects and is correctly depicted through refractive, transparent materials. There are two different sky modes for rendering the start and end points of the color range: From Zenith to Horizon - The color range graduates from the start color at the zenith, through the color range, to the horizon. Sun-Centered - The color range graduates from the center of the sun over the complete sky sphere towards the opposite side of the sphere. This method has a big advantage when rendering animations with camera pan shots. If you define a color palette for a sunset, starting with a bright color and graduating to very dark blue colors at the end, this color range will cover the whole sphere and is rendered correctly from every camera position. The area surrounding the sun will always be bright, and if you turn round, the other side of the horizon will automatically be rendered with the dark colors of the color range.
Of course, there has to be a parallel light object with activated sun mode, otherwise the colors will graduate automatically from zenith to horizon again. Both color range modes have their pros and cons. For daylight shots with the sun standing high above, a color range starting from a blue tone at the zenith and graduating to a very bright blue or white color at the horizon is often the best solution. For colorful sunsets with bright areas around the sun and dark areas on the opposite side of the horizon, the sun-centered option fits better. For the color range from zenith to horizon only, there are two additional parameters to lighten up the area around the sun. In principle, this is a combination of the two models described above. Color distortion near sun - If a parallel light object is acting as a sun, the color range is again calculated from zenith to horizon, but an additional filter function also distorts the color range around the sun. To include this effect the button underneath the color range bar has to be activated. There are two additional parameters: Area - Defines the area of the sky around the sun that is within the distortion radius. Strength - The strength of the filter function. The illustration above shows a sky with a color range starting from blue at the zenith and graduating to a very light blue at the horizon. For the left picture no color distortion has been calculated and therefore it resembles more a cool moon standing in the sky than a bright sun. The right picture shows the difference. With activated sun distortion the color range also graduates around the sun towards the horizon, creating the impression of a glaring firmament. .topic 580 Choose the "Atmosphere" tab to bring the parameters for the atmospheric color filters to the fore. As a light ray traverses an atmosphere some light is extinguished and some light may be added by emission and scattering. This results in a change of color with distance, i.e. dark backgrounds become bluer and light ones become redder with increasing distance. Note that the atmospheric color filters build on the atmospheric fog effect - if the atmospheric fog is switched off then the color filters will also have no effect! Additive - With increasing distance a blue component is added to the scene colors. Particularly daylight renderings of mountain sceneries profit from this effect because only with it does the picture have a real impression of depth and distance. Click on the color button if you want to edit the additive color component. Filter - Light colors, i.e. clouds, snow or fog, are filtered with this color with increasing distance. With this filter you can simulate the reddening effect when the sun sets, but you can also apply this filter to daylight scenes, for instance, on cold winter days when even the midday sun stands low at the horizon and the sky graduates from a deep blue to a very bright color, dipped with a trace of violet or orange at the horizon. Strength - Controls the intensity of the filter effects. Note again that both filters are connected directly to the atmospheric fog, so the filter effects will also strengthen or weaken with the density of the fog. .topic 560 Choose the tab in the atmosphere selection to edit the parameters for cloud formations. Clouds - If you select the button, clouds are simulated in addition to the sky model. Clouds can be influenced considerably with the parameters in the dialog.
Of course you can simply load an existing file from the library and use it as a starting point to create your own cloud-filled skies. Example: bright sky with pretty cumulus clouds (the settings shown above in the dialog picture were used for this picture). Add Cloud Layer A sky with clouds will contain at least one cloud layer. For complex cloud formations up to 3 additional cloud layers can be added. You can select the layer you want to edit in the "Layer" selector box at the top of the dialog. Next to the selector box are two buttons for adding new layers or deleting existing ones. With the "Sky & Clouds" background mode an additional button appears beneath the preview window. If activated, only the currently selected cloud layer will be drawn in the preview to ease the adjustment of this layer. Sunset with 2 cloud layers: a somewhat lower and therefore more darkly shaded cloud layer and a second cloud layer high up in the sky, still illuminated by the low sun. Cloud Color and Brightness - In the Cloud Color box you can specify a basic color for the clouds. If a sun light object is activated then the resulting cloud color will be calculated from this cloud color, the illumination of the sun light and the ambient cloud brightness (the last parameter in the upper box of the dialog). Random - This initializes the random parameter used to generate the clouds prior to picture rendering. Each new value creates a completely new cloud field. Height - The height of the cloud layer. Accumulation - The smaller the value, the greater the gaps between cloud formations. Density and Transparency - For puffy cumulus clouds use a somewhat higher density value and no transparency, while for thin cirrus clouds low density and high transparency values are recommended. Turbulence - Turbulent flows influence the cloud formation. Crispness - A low value results in smooth, rounded clouds. With a greater value the clouds become crispier and more detailed. Volume and Contours - These two parameters control the 3D effect caused by the illumination of the sun. The greater the Volume and Contours parameters, the more clearly the cloud contours bulge out. For a thin cirrus cloud layer these values should be rather low. Animate Clouds A flow of the clouds can be switched on separately for each cloud layer. The angle instrument controls the Wind Direction and the Velocity parameter determines the speed of the cloud flow. If the Turbulent option is switched on then a turbulent flow with continuously changing cloud formations is calculated. Basically all background parameters can be animated except for the Random and Accumulation parameters that define the basic random field for the cloud formation. For instance, you can stipulate in one keyframe a very low density together with high transparency and then in a following keyframe a higher density with less transparency. In the final animation a cloud bank would appear virtually out of nowhere. Similarly, you can define different color ranges at different keyframe positions to merge, for instance, a golden sunrise background into a bright blue sky. Condensation Trails Besides CO2, the combustion engine of an aeroplane also exhausts ordinary water vapour. A visible condensation trail arises when this vapour freezes almost instantly in the higher levels of the atmosphere. In CyberMotion condensation trails can be built from normal cloud layers that are overlaid with an additional stripe mask.
The position, the width and the length of the trail can be specified with the corresponding parameters at the bottom of the clouds page. The orientation of the trail is controlled by the wind direction (the trail lies at right angles to the wind direction so that it floats with the wind if the -option is switched on). With all these parameters you can freely arrange several condensation trails at different heights and different orientations in the atmosphere. As mentioned above, condensation trails are built from normal cloud layers, so all cloud parameters also affect the appearance of condensation trails. If, for instance, the accumulation or density parameters are too low, then gaps often appear in the course of the trail. On the other hand, depending on the influence of the weather, condensation trails may begin to dissolve or to mix with other air currents, so you can apply this effect intentionally. The cirrus rolls of this marvelous cloud formation consist of only 4 condensation trails positioned one after the other. .topic 570 The fog parameters of the atmospheric background model Don't be startled by the variety of parameters presented for the fog functions. If you just want to use "normal" atmospheric fog, only the few parameters in the top box are necessary. All of the rest is relevant for the ground fog only, which, because of its cloud-like attributes, also presents a corresponding choice of adjustments. Atmospheric fog and ground fog can be applied separately or both in combination. Atmospheric Fog Fog - Switch the button on to include an atmospheric fog effect in your scene. Atmospheric stands for a veil of mist and fog that increases exponentially with distance and, starting from ground level, gets constantly thinner with increasing height, as in a real atmosphere. The atmospheric fog effect - together with the atmospheric color filter functions - is indispensable for all realistic outdoor scenes because only with it does the picture have a real impression of depth and distance and smooth transitions between sky and horizon. Fog Color - To edit the fog color just click on the corresponding color button. You can enter a simple gray or white or just any color you like, e.g. a bright orange tone for sunset effects. Illuminate - If this option is switched on, then an appropriate fog intensity is automatically calculated from the fog color and the light settings: Atmospheric Fog - Only the irradiance coming from sun lights is taken into account. Ground Fog - If the ground fog is rendered volumetrically, then the fog will be illuminated by all types of light sources, e.g. a lamp casting a visible light halo or a spot penetrating the mist with a visible light cone. Do not mix this up with the simple light effects or the visible spot cone effect. If volumetric ground fog is activated in combination with the option, then a real shading of the fog medium is calculated. Density - Controls the density of the fog. Ground Height - The maximum density of the fog is at ground height and the density decreases with increasing height. If the ground level of your plane object is different from zero, you can enter here the corresponding value of the y-position of the plane object. Beneath this ground height the density of the fog will be calculated with a constant maximum density. If you click on the button the ground height will be automatically set to the lowest object height in your scene. Height - Above ground height the fog density decreases until the maximum fog height is reached.
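To make the interplay of the Density, Ground Height and Height values just described a little more tangible, here is a small, purely illustrative Python sketch of a height-dependent fog density. A simple linear falloff above ground level is assumed here; the actual formula used by CyberMotion may well differ, and the function and parameter names are made up:

    def fog_density(y, density, ground_height, fog_height):
        # constant maximum density at and below the ground height
        if y <= ground_height:
            return density
        # no fog at all above the maximum fog height
        if y >= ground_height + fog_height:
            return 0.0
        # in between, the fog thins out with increasing height (assumed linear)
        return density * (1.0 - (y - ground_height) / fog_height)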
You can use the fog height parameter very effectively for mountain peaks, where the massif vanishes into the distance but the peak remains clearly visible above the fog bank. But the maximum height of the fog is also relevant for the transition between the clouds and the fog at horizon level. If you want to render outdoor scenes on clear and sunny days with clearly visible clouds, a low maximum fog height is appropriate, whereas on foggy days with barely visible cloud formations a correspondingly higher value for the fog height ought to be entered. Ground Fog The atmospheric veil of mist and fog reaching high up into the sky is indispensable for the rendering of realistic atmospheres, but we still need an additional layer of fog to simulate heavy fog limited to the ground area. The basic parameters for atmospheric fog and ground fog are much the same, with only one exception: the decrease of the fog density with increasing height is optional. If the button is switched off, the density of ground fog is constant and independent of height. This is the right setting for low and dense fog banks on the ground. But if you want hills and peaks slowly emerging from a dense ground fog, then switch the button on again to thin out the fog with increasing height. Example of volumetric ground fog above a mountain range. You can download the corresponding animated demo file from the internet library. Volumetric Ground Fog - If you switch on the button, then the ground fog layer will be rendered volumetrically. This is done by tracing the viewing ray through the foggy medium and taking many samples of fog densities along its path. If the option is also activated, then together with the density calculation at every sample point a shading routine is carried out to determine the illumination on the spot. This is of course very time consuming but also very effective. Volumetric fog is based on routines similar to those for cloud formations, and correspondingly a lot of parameters are presented to control the appearance of ground fog, ranging from a uniform medium to fluffy fog banks. Finally, you can even animate volumetric fog to let it rise from the ground and flow with the wind direction. The picture above shows the shading capabilities of volumetric fog. A spotlight above the fog bank and a green lamp light directly in the fog illuminate the surrounding fog medium. You can find the original demo file in "...projects/volumetricfog/fog_illumination.cmo". The parameters: Random - This initializes the random parameter used to generate the fluffy fog formations. Each new value creates a completely new fog field. Quality - While the viewing ray is traced through the scene, many samples of fog densities and illumination values are calculated along its path through the fog medium. With the help of the quality parameter you determine the intervals at which new samples are taken. With a higher quality value the step width becomes shorter and the number of fog densities to be calculated for the general density estimation increases considerably. If fog illumination has also been switched on, the rendering time increases even further. Therefore, it is advisable to set a very low quality (0.5 to 0.85) for preview renderings when working on the project and only set a relatively high quality of about 0.95 to 1.00 for the final pictures. Scale - The underlying fractal patterns used in a volumetric fog calculation are rendered close together or wide apart depending on the scale value entered.
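In essence, the Quality parameter described above sets the step width of this sampling along the viewing ray. The following Python fragment is a rough, hypothetical sketch of such a ray march; the mapping from quality to step width and all names are assumptions, not CyberMotion's implementation:

    def march_fog(ray_origin, ray_dir, ray_length, quality, density_at, light_at=None):
        # assumed mapping: a higher quality value gives a shorter step width (more samples)
        step = max(1.0 - quality, 0.05) * 10.0
        transmittance, accumulated = 1.0, 0.0
        t = 0.0
        while t < ray_length:
            p = [o + t * d for o, d in zip(ray_origin, ray_dir)]
            d = min(density_at(p) * step, 1.0)           # fog absorbed over this step
            light = light_at(p) if light_at else 1.0     # optional fog illumination sample
            accumulated += transmittance * d * light     # fog contribution seen by the camera
            transmittance *= (1.0 - d)                   # remaining visibility of the background
            t += step
        return accumulated, transmittance

With higher quality the loop runs many more times per ray, and if an illumination callback is supplied each of those samples becomes more expensive still, which is exactly why a low quality is recommended for preview renderings.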
Two pictures shot from a position vertically above the ground fog, with only a simple black plane object underneath to contrast with the fog. The picture on the left was generated with a small scale value, which results in a smooth random pattern stretched wide apart. With a higher value for the scale function, the random pattern is rendered closer together, with much more detail and frequent gaps in the pattern. Iteration - The number of iterations defines the level of detail for the underlying fractal pattern. A single iteration will result in a very blurred pattern. Further iterations will add new details to the fractal noise. The picture on the left was rendered with only 2 iterations, whereas 4 iterations were used for the picture on the right. It might be a good idea to reduce the number of iterations for faster preview renderings, too. Thin Out - Adds more detail to the fog by increasing the gaps in the random fractal pattern. The pictures above demonstrate this effect: on the left a fog bank rendered with a value of 0.5, and on the right side the same fog pattern cleared up further with a "Thin Out" value of 0.9. Turbulence - This parameter increases the turbulent flow in the fog, especially when the fog is animated. Diffuse - Modifies the shading calculations of illuminated fog, if the option is activated. The higher the amount of diffuse reflection, the more the fog changes from a light hazy mist to dense clouds of smoke. A ground fog pattern is shown above on the left and clouds of smoke are depicted in the right picture. With the exception of the diffuse parameter, 0.20 for the left and 0.80 for the right picture, both images show the same fog pattern. Animating Volumetric Fog Fog patches drifting smoothly about a mountain pass or a marshland scene with vapour rising in the hot morning sun - to render such scenes you just have to switch on the button and determine the movement direction and velocity of the fog. You can select one of the following three movement types: Rising - The fog rises from the ground and vanishes higher up. Wind Direction - The fog moves with the wind. The wind direction can be set on the clouds page of the atmosphere selection. Rising & Wind - The fog rises from the ground and simultaneously moves with the wind. Velocity - Because we do not want the fog to race above the ground at the speed of the wind, you can specify here a separate velocity for the fog movement. .topic 680 The rainbow - sun rays are refracted when entering and leaving a rain drop. Simultaneously a total reflection occurs at the back side of the drop. That is why the sun always stands behind the observer and the rainbow appears as a circle around the straight line running from the sun through the observer to the midpoint of the rainbow circle (a rainbow cone of 42° opens in front of the observer). Because of the reflected light the inner area of a rainbow appears brighter than the area outside of the rainbow (caustics). Additional reflections inside the rain drops may produce additional rainbows around the main bow, but with rapidly fading intensities. In CyberMotion this physical model is adopted for the most part. If you switch on the rainbow effect, the rainbow is not simply drawn as colored circles in the background. Instead, the visibility of the rainbow depends on the position and point of view of the camera and the position of the sun. Therefore, if you want a rainbow to appear in the picture the sun has to shine from behind the camera.
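Geometrically, this simply means that the primary bow lies on a cone of about 42° around the anti-solar direction (and the secondary bow at about 51°). Here is a short, purely illustrative Python sketch of that test; the function names and the assumed bow width are made up, and this is not CyberMotion's code:

    import math

    def angle_to_antisolar(view_dir, sun_dir):
        # angle between the viewing ray and the direction pointing away from the sun
        anti_sun = [-c for c in sun_dir]
        dot = sum(v * a for v, a in zip(view_dir, anti_sun))
        length = math.sqrt(sum(v * v for v in view_dir)) * math.sqrt(sum(a * a for a in anti_sun))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / length))))

    def on_primary_bow(view_dir, sun_dir, bow_width_deg=1.5):   # bow width is an assumption
        return abs(angle_to_antisolar(view_dir, sun_dir) - 42.0) < bow_width_deg

A viewing ray only hits the bow if its angle to the anti-solar direction is close to 42°, which is why the sun must stand behind the camera for the rainbow to appear in the picture.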
The best thing to do would be to set the camera zoom to a low value in order to locate the rainbow in the sky. Then adjust the rainbow position by correcting the angle of light incidence for the sun, and finally zoom in again with the camera. If you are lucky you can observe the real circular nature of a rainbow when looking out of a flying aeroplane. Back on the ground, the lower part of the rainbow will be hidden again by the earth. Example rendering with a bright main rainbow and a dim secondary rainbow. The caustics effect was switched on to lighten the inner area of the rainbow. Furthermore, the height was limited so that the rainbow fades with increasing height. Whether a complete arc can be seen depends, among other things, on the height and extent of the rain clouds. The parameters: Intensity - controls the intensity of the main and secondary rainbow colors. Since the intensity of the secondary rainbow always falls off relative to the main rainbow, it is specified only as a fraction of the intensity of the main rainbow. Caustics - defines the brightness of the caustic reflections within the rainbow. Height° - limits the height of the rainbow depending on the maximum opening angle of the rainbow (main bow 42°, secondary bow 51°). .topic 690 On the "Rain/Snow..." page of the atmospheric background model you can switch on a particle effect to simulate rain (stripes) or snowflakes or, for example, floating particles in water. The particles are generated after the rendering of the picture in post-processing mode. The illustration above shows some details from a small demo animation you can admire in the internet gallery at www.3d-designer.com. Snowfall is setting in, covering the landscape with a pretty white coat. To achieve this effect, material animation of the terrain texture layers was also applied. The parameters: Initialize Standard Settings With these 3 buttons you can initialize the settings for the particle effect to one of 3 standard types. The first button retrieves the parameters for rainfall, the second button will set the parameters for snowfall and the last one sets the standard parameters for a floating particle effect. Particle Types There are two basic types of particles - half-transparent stripes for rain and a somewhat rounded shape for snow and floating particles. Example of rain particles, rendered by Pascal Heußner. Number of Particles - The number of generated particles for the weather effect. This number can also be zero. This is useful if you want to start the effect at a later time in the animation. Example: You want the rain to set in at frameposition 100 with a few rain drops. Over the following 100 frames, up to frameposition 200, the rain becomes stronger. Then it rains with constant intensity up to frameposition 400 and finally the rain decreases again to zero raindrops at frameposition 500. To realize this animation you simply have to move to the corresponding frame positions and adjust the number of particles for the weather effect. Each time you change one of the parameters that can be animated in the background dialog (indicated by an emphasized background color of the edit field), a new parameter keyframe is automatically generated for the animation. The settings could be as follows: Frame 1, Particles 0 - No rain yet. Frame 100, Particles 0 - The rain will begin here. Since no parameters have been changed at this frameposition, we have to manually add a new key in the animation editor.
Simply select the background object in the animation editor and add a parameter track and then operate . Frame 200, Particles 10000 - The particle stream increases from 0 particles in frame 100 to 10000 particles in frame 200. Frame 400, Particles 10000 - Between frameposition 200 and 400 a constant stream of 10000 particles is generated. Again, no change is made in the background dialog, so just add a key in the animation editor for the parameter track of the background object. This keyframe marks the point from which the particle stream decreases again. Frame 500, Particles 0 - The rain stops. Intensity and Transparency - These parameters control the intensity and transparency of the particles. Of course rain stripes appear more transparent and less intense than shiny snow flakes. But a high transparency value can also be useful for the rounded particle type, for instance, for floating underwater particles. Or think of a somewhat hazy weather situation, where the snow flakes become blurred with the background. Velocity - controls the rate of fall of the particles. Wind and Turbulence - The speed of the wind is combined with the rate of fall. The wind direction is given by the wind direction of the first cloud layer defined on the clouds page of the atmospheric background model. Turbulence adds some chaos to the particle stream, which is especially useful for snowfall. Rendering a Preview Animation In contrast to the particle systems based on real 3D objects, the weather particles are rendered very fast in post-processing, after the rendering of a picture. It is therefore advisable to switch off all unnecessary scenery objects, including clouds and other time-consuming effects, and to render complete preview animations of the weather effect only. Test and change all parameters accordingly, especially the number of particles, velocity, wind and turbulence, before you switch all scenery objects and effects on again for the final rendering. .topic 73 With the help of the starfield generator it is possible to create a starry sky that you can even animate. The starfield is a background model that can be combined with the other atmospheric effects. Thus it is possible, for example, to combine stars with a clouded sky. In this case the stars will be filtered by the clouds and fog and not simply drawn above the calculated cloud cover. No Atmosphere If you want to create a simple starry background for a scene in outer space, without atmospheric effects like clouds and fog, then you can switch off all atmospheric effects at the same time when you operate the button. With it, the background color will also change to a simple black. The Starfield The starfield is a genuine 3D starfield and not simply a 2D drawing that is put into the background. The stars will be scattered around the entire 3D space (not all of them visible in the picture area), so do not be surprised if you have entered 10,000 stars for a particular type of star and you can only see a few hundred in the picture. The advantages of a 3D starfield are evident: Real camera and zoom effects - The visible starfield moves along with the camera movements and you can both zoom in and out on the stars. The stars (being genuine 3D objects with coordinates) can, of course, be animated. Parameters There are 4 basic types of stars you can create with the generator. Use the check box beside each star type image to switch this kind of star on or off.
For each of the different types of stars you can choose the number of stars to generate, the basic color, and the color deviation that is randomly determined for each star. The intensity parameter controls the transparency of the stars. This parameter can be animated, so you can produce a proper day (intensity 0) to night (intensity 1) transition where the stars become more intense as night falls. The "Random" parameter at the bottom of the dialog initializes the random generator to produce different appearances of the starfield. Moving Starfield Use the option if you want to animate the starfield for a film sequence. As a result the stars will move with a given speed, adjustable with the -parameter, in the direction of the camera. This is similar to the effect you are already familiar with from corresponding screensavers. Here, however, camera movements and camera zoom are calculated in addition to the star movements. So, if the camera moves to the right, then the starfield will move to the left and vice versa. More distant stars will be rendered fainter than nearer ones and will become brighter as they approach. The same applies to the size of the small suns (star type 4). Finally, switch on motion blur and you will achieve the ultimate star-flight effect. .topic 590 Choose the Bitmap entry in the background select box if you want to copy a picture into the background. Operate the button to display the file select box and then select a bitmap suitable for your project. The bitmap is automatically scaled to the size of the picture you are rendering. The picture size can be adjusted in the render options dialog. Of course, the best results are obtained when the background and the rendered picture resolution are the same or at least have the same proportions, otherwise the picture will appear stretched in one direction. .topic 180 Learn the basics of producing your own 3D films. Animation Introduction, animation possibilities and navigation button-strip Introduction Modeling Mode versus Animation Mode Animation tracks and keyframes Track types The animation button-strip Navigation in time Playing preview animations in the viewport windows Creating keyframes manually with the record function Forcing key creation for several tracks and object hierarchies Animation possibilities for the different object types What is hierarchy-independent animation?
How to animate materials, light settings, the camera or the background The Animation Editor Edit tracks, keyframes and animation parameters The timeline window Selecting a frame or a range of frames Selecting objects and tracks Undo or redo actions Add or delete tracks Add or delete keyframes Delete selection Inserting frames between keyframes Moving a range of frames Switching objects off or on again during an animation Moving objects on curved B-Spline paths Acceleration and deceleration Cut, copy and paste a frame zone Copying absolute positions and angles or a relative movement pattern Duration of an animation Playing speed of an animation Part-render an animation Depiction of movement paths Motion Blur Start the rendering of an animation See also: Tutorial - Simple Animation Tutorial - Animation and Object Hierarchies - Assembling a Robot Arranging Objects in Hierarchies Moving Objects and Movement Paths Rotating Objects and Movement Paths Scaling Objects in Relative Mode Tutorial - Animation and Deformation - Dolphin Movements Field Rendering for TV-Output Tutorial - Examples for Particle Animation .topic 51 The picture shows some details from a little demo film made with CyberMotion 3D-Designer. You can see this and other examples in our internet gallery at www.3d-designer.com. Introduction Computer animation is a wide field that is constantly being developed further. While only a few years ago an animation film was easily recognizable as such, because of the awkward movements of the animated characters and the invariably plastic-like appearance of the materials, nowadays it is often difficult to tell whether the presented scene was shot in a real environment, built as a handmade model, or is even a pure computer creation - particularly since all these possibilities are often combined to get the best results. Admittedly, hundreds of people are involved for years in the production of a professional animation film and the costs explode to many millions of dollars. Nevertheless, I think that with CyberMotion you can also realize outstanding animations, since the program offers a wide range of possibilities combined with an intuitive interface that will allow even beginners to master their first steps into animation films successfully. For instance, simple animations can be set up with a few mouse actions, moving or rotating objects at different time positions to their destinations, while the program automatically records the keypositions and interpolates the steps between these keyframes. Furthermore, almost all parameters can be animated, for instance the settings for lights, materials or the background, just by moving to the corresponding position in time and changing a parameter. With these simple parameter animations you can bring life to volumetric fires, running water, moving clouds and volumetric fog, or add rain and snow to your animations. Then there are the possibilities for hierarchical object animation - child objects inherit movements from their parents and follow these movements automatically, while still retaining their freedom of movement, and therefore can still execute additional movements independent of their parent objects. Take, for example, a moving robot that takes all its subordinated arms and joints along with its movement, while at the same time the joints still perform further rotations of their own, or gripping tongs open and close again. Hierarchical object animation is also the basis for the animation of characters.
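The underlying principle is easy to picture: each object only stores its own local movement, and at playback time the parent's movement is applied on top of it. The following tiny Python sketch is purely conceptual - all names are made up, only positions are considered, and this is not CyberMotion's internal data model:

    class Node:
        """Minimal stand-in for an animated object with its own local keyframes."""
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent
            self.keys = {}                       # frame -> local position (x, y, z)

        def local_position(self, frame):
            nearest = min(self.keys, key=lambda k: abs(k - frame))   # a real system interpolates
            return self.keys[nearest]

        def world_position(self, frame):
            x, y, z = self.local_position(frame)
            if self.parent:                       # children are carried along by their parents
                px, py, pz = self.parent.world_position(frame)
                x, y, z = x + px, y + py, z + pz
            return (x, y, z)

    robot = Node("robot")
    arm = Node("arm", parent=robot)
    robot.keys = {1: (0, 0, 0), 21: (20, 0, 0)}   # the robot drives to the right
    arm.keys = {1: (0, 2, 0), 21: (0, 4, 0)}      # the arm lifts relative to the robot
    print(arm.world_position(21))                  # -> (20, 4, 0): inherited plus own movement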
The skin and bones technique uses a hierarchy of bones, a skeleton, which is subordinated to a corresponding skin object enveloping the skeleton. Moving the bones will deform the skin, and so the character comes to life. For the animation of the skeleton, Inverse or Forward Kinematics is often applied - techniques that allow you to pull at the joints of an object hierarchy in the same way as you would pull at the arms of a jointed doll. Today, most film and game productions use Motion Capturing to animate characters, an expensive technique that records the movements of real actors and transfers them onto the skeletons of virtual computer characters. However, very successful films have already been produced using nothing more than simple Forward Kinematics. Forward Kinematics may not appear as comfortable as Inverse Kinematics at first glance, but it provides better control of movements, especially if many joints are involved. Another great advantage of CyberMotion's hierarchical animation system is the reusability of movement patterns. For instance, in the Select Objects dialog you can remove a skeleton from its skin (or remove it from the hierarchy, respectively) and switch off all other objects except the skeleton. Then you could save only the skeleton and its animation data via the option "Save Selected Group" into a special "skeleton" library for later reuse in other projects. Changing from Modeling Mode to Animation Mode Working on a project is divided into Modeling Mode and Animation Mode. There are two prominent buttons at the top left corner of the CyberMotion window to switch between the two modes. In principle there is no great difference between Modeling and Animation Mode - you have the same tool menus for both work modes, except that some of the functions in Animation Mode are no longer accessible in Modeling Mode and vice versa. In Modeling Mode all changes made to an object - e.g. the deforming of an object by working on individual points - are permanent changes of the object's shape, while in Animation Mode every action is merely a transformation of the model data and can be undone at any time by reversing the working steps or deleting the keys that were created automatically when manipulating an object in Animation Mode. Another example: Scaling an object and its children in Modeling Mode will result in a permanent change of size of the model throughout the entire animation. If the children are deformed by this scaling, this deformation will be a permanent change of the shape of the objects. If you scale an object and its children in Animation Mode it will only be a temporary change of size. Moreover, the children will not be scaled at all - it is just their coordinate systems that are temporarily deformed by the scaling of their parents, without influencing the children's object and animation data (see also: Hierarchy Independent Animation). Besides these main differences, the working processes in Modeling and Animation Mode are very similar. In both work modes you can move, scale or rotate objects, position and align the camera and start picture rendering at any time. Detailed descriptions of the differences and restrictions, depending on whether you are working in Modeling or Animation Mode, are provided in the corresponding chapters of the tool menus. Animation Tracks and Keyframes - A First Simple Animation CyberMotion enables animations to be implemented quickly, based on simple, intuitive procedures.
Example: You have created a sphere that should move across the screen from left to right in 21 steps. First you have to change to Animation Mode. Thereupon the animation button-strip at the bottom of the screen will be enabled. With the animation button-strip you can readily move backwards and forwards within your animation. Now, in the "Move Object" menu you can move the sphere with the mouse to its initial starting position on the left side of the screen. This operation will automatically generate a position keyframe on a corresponding position track for this object. We will now move forward in time to frameposition 21. Simply press the button (10 steps forward at a time) in the animation button-strip twice or input the frame number directly via the keyboard. Then position the sphere to the right at its final position. This will again generate a new position key for the sphere. From the recorded keypositions in frame 1 and frame 21 the program can interpolate all the remaining frames lying between those keypositions by itself (see the short sketch a little further below). In the illustration you can see the movement path of the sphere object - the big yellow squares indicate the positions in time where keyframes were generated and the smaller yellow points indicate the in-between positions calculated by the program. Now we are already finished with our little animation. Actually, you only had to move the sphere at two different times on the timeline; all the rest was done automatically in the background by the program. You did not even have to use the animation editor to produce this simple animation. In the animation editor the automatically created tracks and keyframes can be edited, or you can add new tracks and keyframes, respectively. This is how the position track for this little animation is presented in the animation editor. The starting position and the end position of the sphere are saved in the emphasized keyframe positions on frame 1 and frame 21. Now it's time for a first preview animation. To view the movement of the sphere directly in the active viewport window you can operate the Play button in the animation button-strip or, instead, select the button to render a grid preview animation in the render window. Track Types For each object of the scene - camera, background and light objects included - you can set up keyframe scenes, in which information such as position, alignment, size and parameter changes are held. Corresponding to the type of information recorded, the keyframes are created on separate tracks. The following track types can be added: Position - The position key holds the object position, of course. Rotate - The rotate key keeps the alignment of the object axes system, a rotation axis and a rotation angle. Scale - On this track changes in size are recorded. Parameter - Parameter changes for background settings, light adjustments or the camera focus are saved on this track. On/Off - Objects and lights can be temporarily switched off and on again, respectively. Deform - This track saves the information for the animated deformation functions. Material - Also a parameter track; it keeps all changes of the material adjustments. Corresponding tracks and keyframes are created automatically every time an object is manipulated, but you can also add tracks and keyframes manually in the animation editor. Animation Editor and Animation Button-Strip In theory you can set up complete animations in CyberMotion without the animation editor. With the help of the animation button-strip you can easily move forwards and back.
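Here is the sketch announced in the sphere example above: a purely illustrative look at what the interpolation between two recorded keys amounts to. Simple linear interpolation is assumed for clarity; CyberMotion also offers curved B-spline paths and acceleration/deceleration, and the names used here are made up:

    def interpolate_position(keys, frame):
        frames = sorted(keys)
        if frame <= frames[0]:
            return keys[frames[0]]
        if frame >= frames[-1]:
            return keys[frames[-1]]
        f0 = max(f for f in frames if f <= frame)     # previous keyframe
        f1 = min(f for f in frames if f > frame)      # next keyframe
        t = (frame - f0) / (f1 - f0)                  # blend factor between the two keys
        return tuple(a + t * (b - a) for a, b in zip(keys[f0], keys[f1]))

    sphere_track = {1: (-10.0, 0.0, 0.0), 21: (10.0, 0.0, 0.0)}    # the two recorded keys
    print(interpolate_position(sphere_track, 11))                   # -> (0.0, 0.0, 0.0)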
If you go beyond the previously existing total number of frames, the animation will automatically extend to the additional frames. Furthermore, changes to the settings of objects, the camera, the background or the lights are immediately noted and, as a result, corresponding tracks and keyframes for the relevant objects are automatically inserted into the timeline. In the animation editor you can, however, easily edit the individual timelines of the objects. There you can add or remove tracks and keyframes, cut, copy and paste sequences or even transfer the movements of individual objects or whole hierarchies to other objects or hierarchies. The Animation Button-Strip If you change to Animation Mode then the animation button-strip at the bottom of the screen will be enabled. Calling Up the Animation Editor In Animation Mode this button on the left will be enabled and you can call up the animation editor to work on the individual timelines of the objects. Navigation in Time With the help of the slider and the green navigation buttons you can readily move backwards and forwards within your animation. The range of the slider is limited by the current length of the animation. To extend the animation range simply use the green jump buttons to go beyond the current length or enter a frame number directly in the frame field located between the green buttons. You can also use your mouse wheel in order to move backwards or forwards in the animation. In this case you have to activate the slider box first with a mouse click, otherwise the viewport's zoom function will be executed instead. The meaning of the navigation buttons: Jump to the start or end of the animation. Jump to the previous or following keyframe of a selected object. This function is evaluated only for those tracks that are selected at the right end of the button-strip. If, for instance, only the rotation track is selected there, then you would only jump between existing rotate keyframe positions. Jump 10 frames forward or back. Move one frame forward or back. Viewport Preview Next to the navigation buttons you find the blue buttons for playing the animation in any of the viewport windows to get a quick overview of the movement paths of objects, lights and camera. (To start an animation rendering as a Preview or True Color animation viewed through the camera, however, you can operate the Render Scene Animation or Render Final Animation button.) Plays the Preview animation in the active viewport window. Press any (mouse) button to interrupt the animation. To play the Preview animation in a different viewport window, activate the corresponding window (mouse click or via the "windows" menu-strip) and operate the Play button again. Plays the animation within the designated range specified in the animation editor. Only the part of the animation between the start frame and end frame is run through. Loop function: If this button is activated the Preview animation will repeat, playing over and over from the start. Ping-Pong function: The Preview animation runs alternately forwards and then backwards again, until you interrupt the action. Recording Keyframes Manually With the help of the record button you can manually record keyframes for all marked objects. Usually new keyframes are automatically created every time you manipulate an object or when you change a parameter in Animation Mode.
But sometimes keys also have to be created at timeline positions where no object manipulation is intended, for instance, to record the alignment and position of an object at a particular moment in time as a starting position for a planned movement from this timeline position. Creating Additional Keyframes for Selected Tracks and in Object Hierarchies When operating the record button, keyframes will be created for all marked objects, but only on those tracks you have previously selected via the five track buttons in the animation button-strip next to the record button. This also applies to the automatic creation of keyframes. Every time you manipulate an object, a keyframe is generated for all tracks that are activated here. Example: If you move an object from one position to another then usually only a position key is created. But if the rotate track is activated, then a position keyframe is generated automatically because of the movement of the object, and additionally a rotate keyframe because the rotate track was selected in the animation button-strip. Always generating both position and rotate keyframes ensures a fixed position and alignment of objects in time. The meaning of the individual track and hierarchy buttons: - Position keyframes are included in keyframe creation. - Scale keyframes are recorded. - Rotate keyframes are recorded. - Deform keyframes are recorded. This relates to the deformation functions, not the skeletal deformation. Using the skin and bones technique will automatically deform objects when moving and rotating the bones. - Parameter keyframes are recorded. This track button serves mainly to include the camera zoom when recording camera positions and alignment. Parameter changes of light or background settings are only recorded when changing the parameters in the corresponding dialogs. Creating Keyframes in Object Hierarchies If you have marked an object in a hierarchy, then you can decide with the help of the last two buttons in the animation button-strip whether you want to include key generation for all children of the selected object as well, or even for the whole hierarchy. - Keyframes are recorded for all children in a selected hierarchy branch. - Keyframes are generated for the whole hierarchy - for the children as well as for the parent objects. You may ask: since child objects always follow their parent's movements anyway, why should I waste valuable memory space to generate additional keyframes for all the objects in the selected hierarchy? Well, it's just to freeze the complete hierarchy at a particular moment in time as a starting position from which further movements can be planned. Example: The forklift in the illustration above is built from a simple red truck and a fork object subordinated to the truck hierarchy. The forklift moves in 10 steps from its starting position on the left to a destination position on the right (frame 1 to frame 11). After reaching the destination point we want the truck to lift its fork. Consequently, we move another 10 steps forward in time to frameposition 21 and move the fork upwards to its raised position. What happens now if we play a preview animation? Instead of the truck driving from left to right and afterwards lifting the fork, the fork will start to move upwards right from the beginning of the animation. This is because only one keyframe for the fork was generated, at the end frame 21, when we lifted the fork to its raised position.
By moving the truck to the right at frameposition 11, a position key was generated for the truck, but not for the fork, since the subordinated fork automatically follows the movement of its parent truck. In this case it would have been advantageous to switch on the creation of keyframes for all objects in the hierarchy before moving the truck. Then a keyframe would also have been created for the fork at frameposition 11 and the fork would not start to lift before frame 11 is reached. Character animation is another example where automatic key creation for the complete hierarchy tree, as well as for the position and rotate tracks, is useful, so that the orientation and position of each individual limb of a character is fixed at each keyframe position on the timeline. Animation Options for Different Object Types Which types of objects and parameters can be animated? Besides normal objects and the camera, you can produce keyframes for a variety of settings such as the illumination, backgrounds, clouds, fog, water and fire. To keep track of all values that can be animated, the edit boxes of those parameters are held in a different background color. The illustration shows two parameters: the upper one is a fixed value that is valid for the whole animation. The second parameter, underlaid with a light orange color, automatically creates a new parameter keyframe on a corresponding parameter track when its value is changed. Objects In each frame, objects can be given a new position, and can be scaled, rotated or deformed. A new keyframe is automatically generated for each change in the current object data on a corresponding animation track in the animation editor, or, if a keyframe already exists, it will be updated. Furthermore, objects can be arranged in hierarchies. Hierarchical structures are essential for animating complex movements. If an object is hierarchically subordinate to another object, then it performs every action of the parent object, but can also execute movements of its own, independent of the parent's movements. In this manner, complex movements for hierarchically linked object groups can be set up, as are required, for example, for the animation of robots or characters. See also: Arranging Objects in Hierarchies Tutorial - Animation and Object Hierarchies - Robot Edit Skin and Bones and Tutorial - Character Animation Hierarchy-Independent Animation All objects save their animation data in an individual coordinate system belonging only to that particular object. This animation data is completely independent of world space and also independent of all movements, rotations or changes of size inherited from other parent objects. Therefore, although we speak of hierarchical animation, the animation data of each individual object is recorded in a hierarchy-independent way. This is an important difference, because it enables you to insert previously animated objects into existing hierarchies (or remove them again) while they keep their own movement pattern and, simultaneously, follow the movements of their new hierarchy parents. What happens, for instance, if in an animation you resize a hierarchical group of objects and afterwards link a newly created object under this hierarchy? The new object will inherit all animated changes of size from its new parents and will grow and shrink together with its parents in the course of the animation.
But if you remove the object again from the hierarchy, then it will immediately regain the original position, alignment and size it had when it was created, and it will no longer follow the movements of the hierarchy it was previously linked to (if no keys have been generated for the object itself). Another example: You have set up an animation for an aeroplane in which the plane follows a particular movement path. Separately from the aeroplane you constructed and animated a rotating propeller. Now, if you insert this previously animated propeller into the aeroplane hierarchy, then the propeller will automatically be taken along with the aeroplane movements while still keeping its own rotations around its longitudinal axis. The concept of hierarchy-independent animation is also very useful for transferring animation data from one object or hierarchy to another. For instance, setting up movements for characters can be a very complex and time-consuming task. Therefore it is a great help that you can save movements you have already set up in a library for later use, or copy them over to other objects or hierarchies. But you don't want an exact copy of the movement, since this would result in all the characters starting from the same position and moving uniformly in the same direction. And here the great advantages of hierarchy-independent animation come into effect. Because the animation data of each object is saved hierarchy-independently in a local coordinate system, the relative movement patterns of objects can be copied over to other objects instead of absolute coordinates and angles. Suppose, for instance, you have animated a character in a little walking sequence. You want to create a second copy of this character walking in another direction. To achieve this you only have to copy the character - all animation data will be copied with the model data. Afterwards you simply need to move the copy of the character in "Move Object" mode, together with its movement path, to a new starting position. Then - in "Rotate Object" mode - you just rotate the figure, again with its movement path included in the rotation, so that it faces in a new direction. If you now play a preview animation you can see that the second figure really walks, with the animation data copied from the first character, from a new starting point in a new direction. The animation editor can also be used to copy relative movement patterns from one object to another, or even from a whole hierarchy to another hierarchy. There, when copying a sequence from a position or rotate track to the buffer, you can decide whether this animation data is to be interpreted as absolute positions and angles or as a relative movement pattern. Material Material keyframes have their own track in the animation editor. Every time you move to a different frameposition in your animation and change the material settings for an object in the material dialog, a new material keyframe will automatically be added to the material track of the respective object. In the materials dialog, all parameters that are emphasized by a separate background color can be animated, including the parameters for the landscape texture layers. Example of the combination of background and material animation: clouds are gathering and snowfall sets in, covering the landscape under a white blanket of snow. Note: The basic size and alignment of textures is defined in Modeling Mode.
There you can adjust the position, size and orientation of textures and bitmaps on an object's surface. Then, in Animation Mode, textures behave very flexibly, smoothly following an object's surface even when it is scaled or deformed. Light Objects Lamps, spot lamps, area lights and fire objects can be animated in the same way as all normal objects, and all transformations are recorded on the respective position, rotate and scale tracks (some transformations will only displace a light object; for instance, if a standard lamp is scaled in a hierarchy with other objects, it will only move with the deformation). Parallel light sources and spot lamps can be adjusted in alignment and in spread (spot lamp). Since parallel lights are not positional, all parameters including the angles for the light incidence will be recorded on the parameter track of the parallel light object. The intensity of lights can be animated for all light sources, so that they can gradually blend into different light colors. Volumetric fire is burning anyway, so you just have to adjust the initial values. But of course size, position and alignment can be animated, too. The light effects can also be animated. All settings for lens flares that involve the size of light halos - circular, star or spots - as well as the intensity of the light halo (color), light circle and spots can be changed at each key. The angle of rotation for the star-shaped lens flares can also be animated. This way you can, for example, generate a radiance that rotates about a light source. Animation of size and intensity parameters enables wonderful light explosions to be generated, for example. Example: A meteor within a volumetric fire cylinder. Light sources can also be made hierarchically subordinate to other objects, thus following their movements. This is important if you want to arrange a light source as a vehicle headlamp, for example - as you then only have to position the light source once. If the vehicle is then animated, the light source automatically moves with the headlight object. Camera Camera position and alignment are recorded on the respective position and rotate tracks for the camera. The camera's zoom values are saved to parameter keyframes on the camera's parameter track. The camera can also be made hierarchically subordinate to other objects and thus follow the movements of the objects. For example, if you arrange a camera hierarchically under an aircraft, then, for the animation, only the aircraft is moved. The camera automatically moves with the aircraft's movements. Background There are many options for animating the background. Almost all background parameters can be animated: in an atmosphere background, for instance, the color range for the sky can be animated as well as moving cloud layers and fog banks, shining rainbows, snowfall and rain, and even the stars in the sky. As with the light and material settings, changing a parameter in the background dialog will automatically create a keyframe on the parameter track of the background object. .topic 65 - Menu "Option - Animation" - Short Cut: + "A". In the animation editor you can edit the timelines of individual objects and adjust the general animation settings. The Timeline Window The timeline window is split into the object list on the left side and the animation tracks containing the keyframes on the right side. You can change the proportion of the split screen by moving the vertical bar located between the two areas.
Above the animation tracks the timeline is displayed. The time is measured in frames, which correspond to the number of pictures rendered for the animation. When you call up the animation dialog, the current frameposition is highlighted in the timeline and an additional vertical red frame indicates the frame column in the track window. (Selected tracks are also marked by a horizontal red frame). With a simple mouse click on the timeline or on one of the animation tracks you can change the current frameposition. If you click on a frame that lies behind the last frame of the animation, the animation is automatically extended by the relevant number of frames. In front of the timeline two small arrow buttons are provided to increase or decrease the visible frame zone of the timeline. With the scroll bar at the bottom of the timeline window you can move backward and forward along the timeline.

You can also select a whole frame zone - to copy or relocate it, for example. Choose the first frame of the required area with the left mouse button and then, while also pressing the Shift key, select the last frame of the required area. In the foregoing illustration the selected frame zone stretches from frame 5 to frame 23. You can even mark the frame zone of the complete animation, from frame 1 to the last frame of the animation, by double-clicking on a track name. If, for instance, you want to mark the complete position track, double-click on the "Position" text in front of the track.

Selecting Objects and Tracks
To select an object simply click on the object's name in the object list. All of the animation tracks belonging to the object will automatically be selected with it. If you want to edit only a single track, just click on the track's name in the timeline window. The track will then be marked, along with the object it belongs to. To select (or deselect) further objects or tracks, press the Ctrl key on the keyboard while clicking. If you press the Shift key instead of the Ctrl key, all objects (tracks) lying between the first and the last selected objects (tracks) are selected or deselected. Using the key combination Ctrl + A you can mark all objects and tracks at the same time.

Undo and Redo
The last 50 operations in the animation dialog can be undone immediately via a separate Undo/Redo function in the animation dialog. The buttons are located at the top right corner of the dialog window. Of course, after leaving the dialog, all changes can be undone as a whole via the general Undo/Redo functions in the main button bar.

Add or Remove Tracks and Keyframes
The functions to add new tracks and keyframes, or to remove them again, are provided in the "Edit Selection" box. The same functions can be obtained from a selection list that opens when you click with the right mouse button in the timeline window.

Add Tracks
Every time you manipulate objects in the viewports or change the parameters of an object, a corresponding animation track is automatically created for the object in the animation editor. But you can also add new tracks by hand, for instance if you want to transfer an animation sequence from one object to another object for which no corresponding track has been created yet. When you operate the button, a list box opens from which you can choose one of several track types. Not all track types are suitable for each object type.
For instance, the background object cannot be moved or rotated, and therefore no position or rotate tracks can be added for the background. Choose the "Add all Tracks belonging to this Object Type" entry from the list if you want to add all tracks that can be created for a particular object type at once. The illustration above shows all possible track types. For clarity, every track type is emphasized by a different color. The example above also shows that for the background object only two tracks are relevant: the parameter track and the On/Off track for temporarily switching objects on and off during the animation. For normal polygon-based objects all track types except the parameter track can be added. Parameter tracks are reserved for the settings of light sources, the background model and the camera zoom. But if you changed a normal object into an area light source, you could also add the parameter track for the area light object.

The track types: For each object of the scene - camera, background and light objects included - you can set up keyframe scenes in which information such as position, alignment, size and parameter changes is held. Corresponding to the type of information recorded, the keyframes are created on separate tracks. The following track types can be added:
Position - The position track records position keyframes.
Rotate - The rotate track records rotate keyframes, which hold the alignment of the object axes system, a rotation axis and a rotation angle.
Scale - On this track changes in size are recorded.
Parameter - Parameter changes for background settings, light adjustments or the camera focus are saved on this track.
On/Off - Objects and lights can be temporarily switched off and on again.
Deform - This track saves the information for the animated deformation functions.
Material - A special parameter track that holds all changes to the material settings.

Delete Tracks
Operate the button to remove a selected track along with all its keyframes.

Add Key
You already know the record button located in the animation button-strip, with which you can create keyframes for selected objects at the current frameposition. Similar to this function, here in the animation editor you can operate the button to add a keyframe to all previously selected tracks. The data saved to these keyframes is calculated from the adjacent keyframes lying next to the selected frame.

Remove Key
You can transform selected keyframes into normal frames by operating the button. If an entire frame zone is selected, all keyframes included in this area are transformed. No frames are deleted, however, and the animation therefore retains its previous length.

Delete Selection
If you have selected a frame zone you can delete it completely using the button. Keyframes as well as normal frames will be deleted, and the remaining frames following the selected frame zone will move up to fill the gap. If all objects and tracks are marked, the whole animation is shortened by the selected number of frames.

Insert Frames
Operate the button if you want to insert additional frames at a particular point between two keyframes. Via the edit field next to the button you can input the number of frames that will be inserted with each click of the button.

Move Frame Range
The number of steps between two keyframes controls the speed of an object's movement.
The ratio of the distances between adjacent keyframes therefore often has to be adjusted to ensure smooth movements. You can do this by moving a selected keyframe (or frame zone) back and forth between its neighbouring keyframes. Just click in the selected frame zone and move it to the left or to the right while holding the left mouse button pressed.

The example above shows a sphere moving in 20 steps from the left (Keyframe 1) to the center of the screen (Keyframe 2). Then, 10 further frames on in animation time, the sphere is moved to its destination point on the right side of the screen. Now play an animation preview. The sphere moves slowly to the center of the screen and then doubles its speed for the rest of the course. Since only half the number of movement steps are used for the second stage, the sphere also needs only half the time for the remaining distance. In general, short movement steps result in slow movements and long distances between movement steps mean high-speed movements. But if, for instance, you moved the second keyframe in the animation editor from frameposition 20 back to frameposition 15, an equal number of movement steps would be used for the first stage from frame 1 to frame 15 and for the second stage from frame 15 to frame 30 - the sphere would move with constant velocity over the complete course.

Switching Objects Off or On Again During an Animation
If you want to hide objects temporarily in the animation, simply add an "On/Off" track. To hide an object at a particular point you simply need to insert a keyframe on the track using the button. To show the object again, simply add another key at the required position on the track. The functionality is very easy - every newly inserted key reverses the visibility state at the current frame position. The illustration above shows the "On/Off" track of a light source. The light is switched off at frameposition 11 and switched on again at frameposition 21. In the timeline the hidden periods are recognizable in that the relevant frames are no longer filled rectangles but are represented by a rectangular outline only. In the viewport windows, objects at hidden frame positions are drawn only in simple grid mode and the grid color is dimmed somewhat towards the background color. You can still select these objects and work with them, but you can also see that the object will be hidden in the final rendering of a picture at this frameposition.

Moving Objects on Curved B-Spline Paths
Normally an object (lamp, camera, etc.) moves in a straight line to the next object position. The above illustration shows a sphere moving along a path that is defined by six keyframes. The movement between the individual keyframes is linearly interpolated. This type of calculation of in-between positions is very simple; however, it is not very suitable for more complex movement paths. B-Spline interpolation of the key positions permits complex movement curves to be generated simply; without it you would have to set up an immense number of key scenes. If B-Spline interpolation is switched on for a keyframe, the keyframe positions are used as reference points for the calculation of a curve that passes through these points. The picture shows the same key positions as before. However, for keyframes 3 - 6 the B-Spline interpolation is switched on.
From this, in the picture you can see how a straight movement is defined from keyframe 1 through keyframe 2 to keyframe 3, and then a curved movement through keyframes 3 - 6. The B-Spline interpolation can be switched on for individual keyframes of the position track. The illustration above shows the position track of our sphere example when the sphere moves only in straight lines from one key to another. In the timeline, the linear movement of objects between two neighbouring keyframes is also indicated by a straight line running from one key to the next. Now we select the frame range covering the last 4 keyframes (first click on frame 21, then hold down the Shift key and click on frame 51). Operate the button to switch on the B-Spline interpolation for the last 4 keyframes. Now, in the animation editor, a curved line is drawn between those keypositions where B-Spline interpolation is switched on. To switch the B-Spline interpolation off again for individual keyframes, choose the corresponding button next to it. If you continue your animation and generate new position keyframes, the new keyframes will automatically adopt the B-Spline status of the preceding keyframe. This way, you only have to call up the animation editor if you want to change the movement mode from linear to curved movement or vice versa.

Acceleration and Deceleration
If you want to avoid jerky movements in your animation, or if objects should not always move at a uniform speed, you need some kind of acceleration or deceleration. The acceleration values can be input in the corresponding "Acc./Decelerate" parameter box. Acceleration can be defined for all tracks except the visibility "On/Off" track. So you can not only accelerate movements but also rotations, changes of size or even the interpolation of parameters between keyframes. You can only edit the acceleration values of a single selected keyframe at a time. So first select a single track of an object and then the respective keyframe. The current acceleration values for the selected keyframe are then displayed in the "To Key" and "From Key" edit fields, and you can adjust them. You can input a value between -1 and +1:
+1 = maximum acceleration
0 = no acceleration, which results in a uniform movement
-1 = maximum deceleration
The following examples all refer to position tracks, because from the movement path of an object you can easily recognize where an object is accelerated (movement steps get longer) or decelerated (distances get shorter). The starting situation: a cone is moving with constant speed in 20 steps from keyposition 1 to keyposition 2. Now, at keyposition 1, a deceleration value of -1 is input for the "From Key" parameter, which results in a soft deceleration until the object comes to a halt in key 2. In the timeline you can now see a little triangle behind key 1. This triangle indicates that an acceleration value has been entered for the keyframe. If the triangle is located in front of a keyframe, a "To Key" acceleration value has been input, which affects the velocity when approaching the keyframe. If the triangle is located behind the keyframe, a "From Key" acceleration value affects the speed of an object directly after running through this keyposition. Furthermore, you can recognize from the inclination of the triangle whether a positive acceleration value ( ) or a negative deceleration value ( ) has been input. This is the movement path for the deceleration.
Clearly visible, the movement steps become shorter in the course of the movement, but not continuously. After about two thirds of the way, the speed of the cone - and with it the spacing between the individual movement steps - becomes constant again. This is because no acceleration value for the approaching stage has been input for the second key via the "To Key" parameter. If you also input a value of -1 for the "To Key" parameter of the second keyframe, you get the following result: Now the movement is continuously decelerated from the starting position up to the end position in keyframe 2. The two triangles in the timeline also show that deceleration values have been input after the first key and before the second key.

With the help of the two "From Key" and "To Key" parameters you can create a complete acceleration and deceleration sequence between two keyframes. A value of +1 for the acceleration from the first keyframe and a deceleration value of -1 for the approaching phase to the second keyframe results in the movement path shown in the illustration above. Starting slowly, the cone rapidly gains speed and then slows down again. This is also an ideal setting for a rotation sequence if you want to start and end the rotation with smooth movements. In fact, you can use this setting for all animation tracks where smooth start and end phases are required. This also applies to the scale track and the parameter track.

Take care to coordinate the acceleration values of adjacent keyframes. The illustration above shows an example of a strong acceleration towards keyframe 2, but no acceleration has been entered for the "From Key" parameter in keyframe 2. As a result the cone speeds up considerably towards keyframe 2, but then, abruptly, the movement continues at a much slower, constant pace. This example looks much better: to get a smooth transition in keyframe 2, a deceleration value has also been entered for the "From Key" parameter of this keyframe, in addition to the acceleration value specified for the "To Key" parameter. Because no acceleration value has been entered for the following keyframe 3, the deceleration slowly turns into a constant speed again. The illustration shows the corresponding timeline for this movement. The triangles with the rising gradient after the first and before the second keyframe indicate the acceleration phase, whereas the declining triangle behind the second keyframe indicates the deceleration phase. There is no triangle in front of the third keyframe, so the object approaches it at constant speed.

Cut, Copy and Paste a Frame Zone
Animation sequences can be cut or copied to the clipboard and - after marking the destination selection - copied to another timeline position of the same object or of any other object. Even the entire animation data of a character hierarchy can be copied this way, provided that the structures of the hierarchy trees of the source and destination selections are similar. If, for instance, you copy the movement data of a character's skeleton over to another skeleton, the order in which the spine, arms and legs of the source skeleton are arranged in the hierarchy has to match the grouping of the destination skeleton. If the source hierarchy contains more objects than the destination hierarchy, the unneeded data of the remaining objects is ignored. The same applies if you try to transfer animation data between incompatible track types. So just try and test everything. You can't do anything wrong.
Use the undo functions frequently. And, when animating complex character animations, never forget to make (numbered) backups of your project as often as possible.

Another special feature of the copy functions: you can either transfer the keyframe data in absolute mode - as a fixed target position and orientation that an object will use to move into exactly the same position and orientation - or, alternatively, you can copy the relative movement pattern, meaning the movement vectors and rotation instructions used to get from one keyposition to another. Furthermore, with the help of the Multi-Paste function the animation data can be copied repeatedly to its destination position. Examples demonstrating these powerful functions will follow, but let's start with a description of the five cut, copy and paste buttons.

Cut - The marked frame zone is cut out and copied to the buffer - the area to the right of the marked range then moves up to fill the gap. Note that not only the keyframes are copied to the buffer but also the number of "empty" frames contained in the marked frame zone. Later, when pasting the buffer data into an animation track again, an exact copy of the cut-out frame zone will be inserted, including the empty frames in front of and behind the keyframes.

Copy and Remove Keys in Selection - The marked frame zone is copied to the buffer. After that the keyframes in the marked frame zone are transformed into empty frames. No frames are deleted by this action; the animation therefore retains its previous length.

Copy - The marked frame zone is copied to the buffer. To transfer the animation data from the buffer to a destination track, select the corresponding track and mark the frameposition where you want to insert the data. Alternatively you can mark a complete frame zone again if you want to replace the marked selection with the animation data from the buffer.

Paste - Replace Selection - The destination frame zone will be deleted and replaced by the data copied from the buffer.

Paste - The animation data is inserted in front of the selected destination frame.

Absolute or Relative Copy of Position and Rotation Key Data
As already mentioned above, you can either transfer the keyframe data in absolute mode - as a fixed destination position and orientation - or, alternatively, you can copy the relative movement pattern, meaning the movement vectors and rotation instructions used to get from one keyposition to another. With the help of the buttons depicted above you can decide which mode is used to copy the data to the buffer. These buttons are only relevant for the cut and copy functions. Once you have copied the data to the buffer, you can't switch over to the other mode for the paste functions.

Absolute Mode - The keyframe data of the source object is copied as a fixed destination position and orientation that is transferred to another object, which will use this data to move into exactly the same position and orientation (with regard to the object axes systems of these objects).

Relative Mode - The movement vectors and angles as seen from the source object's local coordinate system are applied to the destination object's local coordinate system. If, for example, a character is moving "forward" along its local z-body axis and this movement is copied in Relative Mode to the destination object, then the destination object will perform this movement along its own z-body axis.
It is much the same for rotations - the copied rotation data of the source object is used to rotate the destination body about the axes of its own local coordinate system. Copying a relative movement pattern instead of absolute positions and angles is a powerful tool, so some examples follow:

In the illustration you see 2 arrow objects. For the green "arrow1", 3 position keys were created, each 10 frames further on in the animation, simply by moving the arrow to the right, then to the top and finally to the right of the screen again. Now we want to copy these 3 position keys over to the orange "arrow2". We call up the animation editor, select the position track of object "arrow1" and then mark the frame zone on the timeline which includes all 3 keyframes. Now copy the frame zone in "Position - Abs." mode as absolute positions to the buffer. Then select "arrow2" in the object list. The selected frame zone can be left as it is; we only need to operate the paste button now and the keyframe data will be copied over to the position track of "arrow2". As a result, the orange arrow moves from its starting position to the first copied keyposition and from there on it moves on an identical path together with the green arrow.

Now we restore the initial project again by undoing the previous copy operation. Back in the animation editor we repeat the previous work steps, but before copying the data to the buffer we switch over to relative mode by selecting the "Position - Rel." button. The illustration shows the scene after copying the data from the green arrow over to the orange arrow in relative mode. Since only the relative movement vectors have been transferred, the orange arrow moves on a path parallel to the green arrow.

But as mentioned before, the transfer of relative movements depends on the orientation of the object's local axes system. So what would happen if we had rotated the orange arrow in another direction before copying the data from the green arrow over to it? Theoretically, it should then move "forward" along the directions of its own rotated axes system, but with the movement pattern copied from the green arrow. And that is exactly what happens. For the example in the illustration above the orange arrow was first rotated by 45° clockwise. After copying the movement from the green arrow over to the orange arrow, the orange arrow moves diagonally downwards along its own local axes system, but with the copied movement pattern of the green arrow.

The previous examples referred only to position tracks. The arrows were merely displaced on the screen, without additional rotations turning them towards the direction of the path. Here is another starting situation. The movement path of the green arrow stays the same, but at the first keyposition it rotates upwards and at the second keyposition it turns to the right again, so that the arrow always faces in the direction of the path. Now let's copy the movement, including the rotations, over to the orange arrow again. This time we select both the position and the rotate track to copy the selected frame zone from arrow1 to arrow2. Before copying the animation data to the buffer we have to choose the copy mode again. For the position track we select relative mode again, but for the rotate track we select "Rotation Abs." first. Let's see what happens. As before, the orange arrow correctly moves diagonally downwards towards the first keyframe position, but from there on it again moves parallel with the green arrow. Why is that?
Well, the absolute orientations of the green arrow's axes system have been copied over to the orange arrow's axes system, so in keyframe 1 and all following keyframes both arrows face in the same direction. Consequently, from this point on both arrows move on parallel paths in the same direction. This illustration shows the scene after copying both the position and the rotate track in relative mode. This time everything is as we would expect. Both objects are moving along their own axes systems while simultaneously turning towards the direction of their own movement paths. Only the movement vectors, rotation axes and angles have been copied and transferred from the object axes system of the green arrow to the local object axes system of the orange arrow, resulting in a copy of the movement pattern instead of fixed positions and alignments.

To reveal the real power of the copy functions we continue with some more complex examples. Suppose, for instance, you have animated a character in a little walking sequence. You can copy this sequence to let the figure repeat its movement, or you can copy the movement over to another character. Several cases are conceivable:

Creating a copy of the whole object hierarchy of the walking character and moving it to another position, from which it then walks in another direction. To achieve this you only have to copy the character hierarchy in the Select Objects dialog - all animation data will be copied along with the model data. Afterwards you simply move the copy of the character in "Move Object" mode, together with its movement path, to a new starting position. Then - in "Rotate Object" mode - you rotate the figure, again with its movement path included in the rotation, so that it faces in a new direction. If you now play a preview animation you can see that the second figure really does walk with the animation data copied from the first character, from a new starting point and in a new direction. And best of all, you did not even have to call up the animation editor once. But the characters are still moving uniformly with matching step sequences. Now you can call up the animation editor and move the complete walking sequence of one character a little to the right, so that this character starts walking somewhat later.

Copying a movement pattern (position track and rotation track in relative mode) in the animation editor from the source character over to a similarly structured second character. The destination hierarchy is located at a different position and faces in another direction, but it will nevertheless adopt the movement pattern from the first character and move along its own line of vision. Copying relative movements will only work properly, however, if both characters are arranged in the same pose, so that the alignment of the bones relative to each other in the source hierarchy matches the alignment of the bones in the destination hierarchy. For instance, a movement which brings a standing figure to its knees can only be transferred to a figure that is also standing. But of course every pose can also be copied in absolute mode in the animation editor, so that transitions from any pose into any other pose can be animated. However, since the destination hierarchy will adopt the absolute positions and angles of the source hierarchy, you have to move and rotate the destination hierarchy afterwards to its desired end position and direction.
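To make the difference between the two copy modes concrete, here is a minimal sketch in Python (purely illustrative - the helper names and the 3x3 rotation matrices are assumptions, not CyberMotion code). In absolute mode the destination simply adopts the source keyframe positions; in relative mode each movement step is read in the source's local frame and re-applied along the destination's own axes:

    import numpy as np

    def rot_z(deg):
        """3x3 rotation matrix about the vertical axis (degrees) - illustrative helper."""
        a = np.radians(deg)
        return np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]])

    # Source character: orientation and two position keys (it walks 2 units "forward").
    src_orient = rot_z(0.0)
    src_keys = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]

    # Destination character: different start position, heading rotated by 45 degrees.
    dst_start, dst_orient = np.array([5.0, 1.0, 0.0]), rot_z(45.0)

    # Absolute copy: the destination simply adopts the source positions.
    abs_keys = [k.copy() for k in src_keys]

    # Relative copy: express each step in the source's local frame,
    # then re-apply it along the destination's own axes.
    rel_keys, pos = [dst_start.copy()], dst_start.copy()
    for a, b in zip(src_keys[:-1], src_keys[1:]):
        local_step = src_orient.T @ (b - a)   # the step as the source object "sees" it
        pos = pos + dst_orient @ local_step   # the same step along the destination's axes
        rel_keys.append(pos.copy())

    print("absolute:", abs_keys)  # destination jumps onto the source's path
    print("relative:", rel_keys)  # destination walks "forward" along its own 45-degree heading

With the relative transfer the copied character keeps walking "forward" along its own rotated heading, which is exactly the behaviour described above.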
Example: You have animated a character so that it walks a few steps straight forward, kneels down, stands up again, and continues to walk a few steps forward. Now you want the character to kneel down once more. You have already animated this movement before, when the character knelt down for the first time, so you just need to copy the keyframe data (in absolute mode) from that particular frameposition to the end of the animation, where the figure should kneel down a second time. When playing a preview animation you can observe that the figure does indeed kneel down at the end of the animation, but at the same time it slides back into exactly the position it occupied when first kneeling down. This is not unexpected, since we copied the absolute positions and alignments of the character. To correct the displacement you just need to move the character, in its kneeling pose, to the required position at the end of the animation.

Another example: Extending a walking sequence
The illustration shows a character animated in a full walking sequence over 5 keyframes. The sequence starts with the character standing with the right leg in front of the left leg and then taking two steps forward until it stands in the same pose again with the right leg in front. Up to this stage we had to animate the sequence by hand. Now we want the character to repeat the sequence 3 times, so that it walks another 6 steps forward. We will do this via the copy functions of the animation editor. The question now is which frame zone we have to select for this operation. The character's pose in key 1 and key 5 is identical, both times standing with the right leg in front of the left one. If we copied the complete range over all 5 keyframes, then after each full sequence we would have two successive keyframes holding the same body posture, and the animation would falter at these moments. Therefore, we include only keyframes 2 to 5 in the copy operation.

In the animation editor we select the whole hierarchy of the character by clicking on the root object of the hierarchy (usually the skeleton's skin object). Then we select the frame zone that includes keyframes 2 to 5. We also include the empty frames lying between keyframe 1 and keyframe 2, so the selected frame zone reaches from frame 2 to frame 25. The character should walk on independently after copying the animation data, without us having to correct the positions in each keyframe. Therefore we choose the relative copy mode for the position tracks, so that only movement vectors are transferred instead of fixed positions. In this example, the rotation track can be copied in absolute mode as well as in relative mode. If you copy in relative mode, only a rotation axis and a rotation angle are copied and not the alignment of the object axes itself; but since keyframe 1 and keyframe 5 hold the same axes alignments in their keyframe data, it doesn't matter whether you copy the rotation instructions leading from keyframe 1 to keyframe 2 behind keyframe 5 or the absolute alignment of the axes system. Both will rotate the bones into the correct positions. Now just operate the copy button to save the animation data to the buffer. Then select frameposition 26 as the new destination position in the timeline. Before pasting the animation data to this frameposition we first adjust the number of copies we want to insert. Since we want to repeat the walking sequence 3 times, we input a corresponding number into the "Multi-Paste" edit field of the copy box.
OK, that's it - now you can click on the paste button to copy the animation data from the buffer to the selected tracks. This is how the timeline is presented after the paste operation... ...and here you see the final animation. The character walks smoothly through its 4 walking cycles, performing 8 absolutely perfect steps. An example of a character already animated in a simple walking sequence is provided in the projects folder under "..projects/character/man_walk.cmo". You can also find the bare animated skeleton (without skin) of that scene in the project folder "..projects/character/skeleton_walk.cmo". The model of the character was kindly provided by the artist Stefan Danecki.

Duration of an Animation, Animation Range and Play Speed
You can set the duration of an animation, the overall number of frames to be rendered and the playing speed in the "Animation Range & Speed" box.

Expanding the Frame Range of the Animation
There are several ways to extend your animation with additional frames: You can simply increase the End Frame parameter, which defines the target frame at which the animation should end. If you click with the mouse on a timeline position that lies behind the last frame of the animation, the animation is automatically extended by the relevant number of frames. In the main menu, after leaving the animation editor, use the "jump forward" navigation buttons of the animation button-strip or the frameposition edit field to extend the animation.

Duration of an Animation
If you click on the clock icon in the "Animation Range & Speed" box, a little dialog opens and you can change the duration of the animation by editing the total number of frames or the time of the animation. If you expand the animation, additional frames are inserted evenly between all keys. This makes the animation longer and - at constant playing speed - slower. If you shorten the animation, frames are deleted between keys (the animation becomes shorter and faster). But be careful: if the animation is shortened too radically and keys are positioned very close to each other, this may result in deleted keys and a change in the animation.

Part-Render an Animation
You can render parts of the animation with the help of the keyframe start and end parameters in the "Animation Range & Speed" box. This way you may render a first part of the animation to check the quality and rendering settings, and later the remaining part of the animation. You can also spread the animation over several computers by rendering a different part of the animation on each computer using the keyframe start and end parameters. Use a video post-processing program to combine the different parts of the animation videos. In the same way you can interrupt the rendering of an animation at a certain frame, save the video, and later continue the rendering in a second video using this frame as the start parameter for the rest of the animation. Look out for links to free video post-processing programs on our web page.

Playing Speed
The fps parameter stands for frames per second (number of pictures per second) and determines the playing speed of an animation. This value, as well as being used during preview animations, is also saved as the playing speed with the generated AVI file.

Draw Animation Path
With the path selection buttons you can decide for which objects the movement paths are drawn in the viewport windows:
Selected Objects - Animation paths are drawn only for the marked object selection.
All Objects - The paths of all objects are drawn.
None - No paths are drawn.
Only Parents - For clarity, only the movement paths of the topmost parents in selected hierarchies are drawn.

Motion Blur
Moving objects and backgrounds appear smudged on photos and films - the amount of this motion blur depends on the exposure time and on the object and camera speed. With Motion Blur you can reproduce this effect over several consecutive pictures of an animation. At the bottom right of the animation editor is a small box in which you can switch on the effect with the button. It gives you two further parameters, which decide the number of individual pictures to superimpose and the number of additional pictures to render in between:
Frames - Determines the number of individual pictures the effect is applied to. These pictures would be calculated in the course of the animation anyway, so the calculation of the Motion Blur effect is restricted to the pictures already being rendered.
Tweens - If this parameter is greater than 0, additional in-between pictures are rendered, which reinforce the smearing effect. These pictures do not go into the animation, but serve only as storage for the Motion Blur effect and are then deleted again.

Example: You want to render a small animation of 10 pictures with the Motion Blur effect. You enter a value of 3 for Frames and a value of 2 for Tweens. Two additional pictures are calculated between each pair of existing frames, increasing the number of pictures rendered for this animation to 10 + (10-1)*2 = 28 pictures. To render the Motion Blur effect for each frame of the 10-picture animation, 3 frame pictures + (3-1)*2 tween pictures = 7 pictures are overlaid.

Do not render tweens unless absolutely necessary, as the calculation of each in-between picture requires the same length of time as the calculation of the normal frames. Whether or not tweens are required depends on the speed of movement of the individual objects - and therefore on the number of frames between two keyframes - and on the required effect. Large steps without additionally calculated tweens produce a stroboscopic effect (which may be exactly what you want). If you plan to render your animations as interlaced videos for TV output with field rendering switched on, remember that with field rendering twice as many pictures (each of half resolution) are rendered; this can also reduce or eliminate the need to render motion blur - which can save rendering time. Here you see 3 examples of the Motion Blur effect. The sphere moves from right to left and the step width is the same every time - only the parameters for Frames and Tweens are changed: Left - Frames 2, Tweens 0. Center - Frames 5, Tweens 0. Right - Frames 5, Tweens 2.

Individual Pictures with Motion Blur Effect
The Motion Blur effect can also be used when rendering single pictures. If, in an animation with Motion Blur, you render only a single picture, then all pictures that lie prior to the picture being rendered and are required for the overlay are calculated automatically.

Start Animation Calculation
To start rendering an animation simply press one of the two render buttons. Both options are also available in the main menu under "Render" and via the Render Scene Animation and Render Final Animation buttons in the button strip. See chapter Render Image or Animation for more information regarding the difference between the two options and for further instructions on saving animations.
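As a quick cross-check of the picture-count arithmetic in the motion blur example above, here is a small sketch (hypothetical helper functions, not part of the program):

    def total_pictures(animation_frames, tweens):
        """Pictures rendered in total: 'tweens' extra pictures between each pair of frames."""
        return animation_frames + (animation_frames - 1) * tweens

    def overlay_pictures(blur_frames, tweens):
        """Pictures overlaid per output frame for the motion blur effect."""
        return blur_frames + (blur_frames - 1) * tweens

    print(total_pictures(10, 2))   # 10 + 9*2 = 28 pictures rendered in total
    print(overlay_pictures(3, 2))  # 3 + 2*2 = 7 pictures overlaid per frame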
.topic 92
CyberMotion always renders an animation as a True Color picture sequence (*.BMP, *.JPG, *.PNG, *.PCX or TGA file format) or as a 24-bit AVI (high quality, uncompressed) video file. These uncompressed (and therefore lossless) video files can become very large, but they can easily be converted to simple GIF animations (suited only for short sequences and limited to 256 colors) or to compressed video files using third-party free- or shareware converters or video processing programs. For instance, you can use the Windows® Movie Maker, which is part of most Windows® installations. You should always try a variety of different compression algorithms/encoders before deleting the original uncompressed file, because output quality and compression rates differ widely.

.topic 71
Particle explosion with lens flare effect and motion blur - see demo file "projects/particle/explode/explode.cmo"

Particle Systems
Particle systems are always useful if a great number of objects are to be moved and it would be very wasteful to animate everything individually. With the help of a particle system you simply choose an existing reference object in the program and then automatically generate up to several thousand copies, which continuously move under the influence of gravity, friction and turbulence, or simply follow the movements of the reference object. As all facet-based objects can serve as the original for particles, the list of possibilities is almost endless. Each particle is a complete object with regard to its material attributes, including reflection, transparency and the casting of shadows. Animated bitmaps on the reference objects are also reproduced on the particles. Under the influence of gravity and other physical forces you can simulate explosions, snow, whirlwinds, meteor swarms, insect swarms, ballistic flying objects, volcanic eruptions at irregular intervals and much more.

Particle System Generation
Particle actions are not especially difficult to set up. All the parameters for a complete particle action can be set up in the single dialog for particle systems. Only for collision tests do you need to change to the material editor, in order to select the objects that are to be considered for collisions with particles. Furthermore, you can temporarily switch the particle actions you have set up off or on again with the "Particle-Systems" button in the render options dialog.

Storage Capacity and Particle Objects
In practice the number of particles represented is restricted only by the available storage space. However, the particles generated by a particle action in a specific frame are combined into one object internally by the program. If the upper limit for the maximum number of represented objects is reached during an animation, no new particles will be generated until space is created by the "death" of existing particles.

Previewing a Particle Animation
Particles are not drawn in the viewport windows where you are working. A preview can, however, be called at any time via the "Render Scene Animation" function. The particles are not displayed in the viewport windows due to the complexity of the particle animation. New particles are constantly being generated and destroyed during particle actions. On each particle, which is moving with a certain speed and direction, additional external forces like chaotic turbulence, whirlwind and gravity are continually at work.
In addition, the movement of the reference object is superimposed, as are the changes of direction caused by particles rebounding in collisions with other objects. Therefore, to show a particle action correctly in the viewport window at any one frame position, practically the entire history of the particle animation would have to be calculated, and that is quite wasteful and time-consuming.

Particle System - Overview
Particle Dialog
Managing particle systems and defining the lifetime and valid range of a particle action
Particle actions - add, delete, copy
Selecting a particle object and determining the number of particles
Particles - frame zone, generation intervals and lifetime
Initializing the random number generator
Stipulating the scale of an area-unit
Save/load particle actions
The particle "Generation" property sheet
Determining start position and initial movement of particles
Start position, rotation and scale
Movement vector of the generated particles
Superimpose the movement of the reference object
Spinning
The particle "Action" dialog page
How physical forces influence the particle movement
Acceleration and friction
Turbulence
Whirlwind
Color-intensity fade
The particle "Collision" dialog page
Switch on collision test and select particle reflectors
See also Tutorial - Particle Animation Examples

.topic 66 - Menu "Options - Particles"
Particle Actions - Add, Delete, Copy
In the editor at the top left is a list box in which all the defined particle actions are displayed. By clicking on existing particle actions in the list box you can switch back and forth between these particle actions and make further alterations.

New Particle Action
To add a new particle action, operate the button directly beside the list box.

Delete Particle Action
To remove a particle action from the list, click on the action in the list box and then operate the button directly beside the list box.

Copy Particle Action
If you require a second particle action that differs only marginally from an already existing action, select the existing action in the list box and copy it by operating the button. You can then make the alterations on the copied action.

Edit Names and Descriptive Text
Click in the small bottom field beside the list box to change the name of the particle action. Click in the field directly beneath the list box to input a short descriptive text for the particle action.

Selecting the Particle Object and Determining the Number of Particles
An existing reference object serves as the original for the production of particles. During the animation multiple copies of this object are animated, depending on the movement restrictions and physical quantities. Only facet-based objects can serve as a reference object. Analytical and light objects cannot serve as a reference.

Selecting a Reference Object
Operate the "Particle Reference" button to select an object for a particle action. An object selection window appears (which you will already be familiar with from the object selection dialog and the material editor). Simply click on the desired object with the left mouse button and then leave the dialog. The name of the object appears on the "Particle Reference Object" button.

Determining the Number of Particle Objects
In the "Particle Reference Object" box you can set the number of particles that are produced each time the defined particle action generates particles. The overall number of particle objects during an animation depends on several factors.
For example, a particle action can be defined that generates particles once, at intervals, or even in each frame of an animation. In addition to the particle-number parameter there is another parameter marked "±". This serves to introduce an amount of randomness into the particle actions. If, for example, a value of 50 is given for the number of particles and a value of ±49 for the variance, then every time particles are generated there will be between 50-49 = 1 and 50+49 = 99 particle objects.

Particles - Frame Zone, Generation Intervals and Lifetime
In the bottom half of the left side of the dialog you can enter all parameters relating to the time sequence of particle generation and lifetime.

Particle Animation - Range of Validity and Lifetime
Particle actions are independent of the settings in the animation editor. You can specify the frame zone for the duration of a particle action with the "Range of frames - from - to" parameters. The first particles are always generated at the start of the specified action. Once the animation passes the end of the specified range, all particles that were generated by the action are deleted, regardless of their lifetime.

Generating Particles at Intervals and over a Specified Time Period
The precise meaning of the following parameters is apparent from the text in the dialog box, so an example is given:
Range of frames from 1 to 500 - This particle action takes place from frame 1 to frame 500 of the animation.
Create new particles every 75 frames ± 25 frames - The particle action starts a new particle generation every 50-100 (75±25) frames.
with a time-duration of 25 frames ± 10 frames - Particles are generated during the following 15-35 (25±10) frames.
with a lifetime of 50 frames ± 10 frames - Each particle has an individual lifetime of 40-60 (50±10) frames before it is deleted.
This example could simulate a volcanic eruption, for example, in which glowing sparks erupt from the crater at irregular intervals and for different lengths of time.

Special cases: Set the parameter for the generation of new particles to "every 0 frames" and particles are generated only once. If a value of 0 frames is set for the lifetime, the generated particles are visible for the entire particle action; the particles, therefore, are "immortal".

Another example: On many occasions it is not sufficient to generate particle objects only at the start of a particle action and then animate them. For example, to simulate a jet engine, new particles leaving the engine exhaust are generated in each frame while older particles are deleted. The parameters for this:
Generation of new particles every 1 frame ± 0 frames
with a time-duration of 1 frame ± 0 frames
with a lifetime of 50 frames ± 10 frames
There are more detailed examples with completed example files in: Tutorial - Particle Animation Examples

Initializing the Random Generator
You will already have noticed that a certain amount of random latitude is allowed for most parameters. The random number generator can be initialized with different starting values via the random parameter at the lower left of the dialog. This way you can vary the particle animation somewhat if you do not like the current random settings.

Specifying the Scale for an Area-Unit
Directly under the random parameter is the scale parameter, with which you can specify how many metres correspond to an area-unit in the CyberMotion 3D space.
This is absolutely essential for the following reason: speed and acceleration are measured in metres per second and metres per second squared respectively. Therefore, in order to render a sensible simulation of the particle action, the program must know how many metres correspond to an area-unit in the 3D space.

Save/Load Particle Actions
All defined particle actions are saved within the "CMO" object file. In addition, however, a particle system library can be built up using the save and load functions of the particle editor. The extension for particle action files is "PTL".

.topic 68
Select the "Genesis" tab in the particle system dialog to bring the property sheet with all starting parameters for the particles to the front. On this page you can edit all the parameters for newly generated particles, such as position, initial velocity, individual rotation and so on.

Start Position, Rotation and Scale
Position of the Particles when they are Generated
The start position for generated particles depends on the current position of the reference object. If the position parameters are zero, new particles are always generated at the origin of the reference object. You can add a random offset to this position via the X-, Y- and Z-parameters. All statements of position refer to the object axes of the reference object. The direction in which the particles move, if they have a starting velocity, is also given relative to these axes.

Starting Rotation of the Particles
Here you can determine a random starting rotation angle of up to ±180 degrees for each axis, about which each generated particle is turned. Example: You have a "meteor" object as the reference object from which to animate a meteorite swarm. All the meteorites - being based on the same reference object - are identical, but with different rotation angles presenting different views of the meteorites, you can bring some variation into the scene. In addition, you can also give the meteorites different sizes and provide them with different speeds of rotation.

Scaling the Particles
Change the size of the particles with the Scaling parameter. The range is given with the ± parameter. Example: Scaling: 1 ± 0.5 - All particle copies of the reference object are scaled with a random factor that lies between 0.5 and 1.5.

Movement Vector of the Generated Particles
Suppose you want to create a jet engine and in each frame generate particles for the flame, which should move in a set direction in an area of ±50 units behind the engine. The movement vector must also be lined up along the longitudinal axis of the engine. If the axis of the engine is the Z-axis, you can use this as the direction in the movement-vector selection box. Give a value of 50 for the X-, Y- or Z-position and switch on the option "Place along movement-vector". If this option is switched on, the highest value of the X-, Y- and Z-positions is used for the position along the movement vector, and the particles are positioned along the movement vector only. You can specify a dispersal for the particle production with the "Angle of deviation" for the movement vector, so that particles are not generated in a dead straight line behind the engine. In this way the flame spreads out a little. If this value is, for example, ±180 degrees, the particles move randomly in all directions in a sphere. This can be used to simulate an explosion.
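To illustrate how the angle of deviation described above spreads the particle directions around the chosen axis, here is a small, purely illustrative sketch; the function and parameter names are assumptions and this is not the program's actual sampling algorithm:

    import numpy as np

    rng = np.random.default_rng()

    def sample_particle_direction(axis, deviation_deg):
        """Pick a random unit direction within 'deviation_deg' of the given axis
        (simple rejection sampling - illustrative only)."""
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        max_cos = np.cos(np.radians(deviation_deg))
        while True:
            d = rng.normal(size=3)          # random direction, uniform over the sphere
            d = d / np.linalg.norm(d)
            if np.dot(d, axis) >= max_cos:  # keep it only if it lies inside the cone
                return d

    # Jet flame: emit roughly along the engine's -Z axis with a 15 degree spread.
    print(sample_particle_direction([0, 0, -1], 15.0))
    # Angle of deviation of 180 degrees: the cone opens into a full sphere - an explosion.
    print(sample_particle_direction([0, 1, 0], 180.0))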
Specifying the Movement Vector Along an Object Axis of the Reference Object
In the "movement-vector" selection box you can select one of the object axes of the reference object, which is then preset as the direction of movement. If, for example, the Y object axis of the reference object points upwards and you choose "+Y" in the selection box, the particles will move vertically upwards. The direction is therefore limited to the object axes, so new particles and their flight direction always line up exactly with the movement of the reference object. Again, take the example of the airplane jet engine. Within the engine is a reference object that should emit a continuous stream of particles lined up along a specified axis of the reference object. The output direction of the newly generated particles then also changes automatically with the flight movement. The result is a jet stream that exactly follows the flight of the jet. Of course, you can still specify the movement vector directly by previously rotating the reference object or its object axes in the desired direction in the "Rotate Object" menu.

Starting Speed
The starting-speed parameter gives the speed in metres/second with which the particles start to move along their preset movement vector on their creation. You can again put in a random deviation from this value via the ± parameter. Incidentally, the conversion factor from metres/second to kilometres/hour is 3.6. For example: 10 m/s = 10 * 3.6 km/h = 36 km/h.
Note: Because we are simulating an accurate physical model, speed is relative to the scale of the scene and depends on the following things:
1. Speed is measured in metres/second. Both units must correspond to those found in the 3D space. With the parameter "1 RE = 0.010 m" you can, for example, input how many metres correspond to an area-unit. If a vehicle is 3 metres long and you construct it as a 200-unit-long 3D model, then the correct relationship is "1 RE = 3 m / 200 = 0.015 m".
2. The passage of time in an animation corresponds to the playing frame rate. At a frame rate of only 10 pictures per second an object is moved a greater distance per picture than at a higher playing speed of, say, 24 pictures per second. The distance per second, however, always remains constant.

Superimposing the Movements of the Reference Object
Particle clouds can perform extremely complicated series of movements. You need only animate the reference object in the normal manner and then switch on the option "superimpose movement of the reference-object" for the particle action. All particles generated for this action carry out the movements (positioning, rotation and animated scaling) of the reference object. Incidentally, the reference object need not be switched on during the animation. You can first switch on the reference object in the object selection dialog to prepare the animation path and then switch it off again before commencing the animation rendering. The particles nonetheless follow the animation path of the reference object, without the possible distraction of rendering the reference object itself. All other outside influences, such as starting speed, gravity, turbulence, etc., can additionally be switched on as required and will introduce great variety into the set movement path.

Spinning
Particles can turn about their own object axes, independent of all outside influences and the movements of parent objects.
The X-, Y- and Z-parameters give the angle through which the object turns per frame about the corresponding axis. The object axes do not necessarily need to be at the center of the particle object. You can also generate particles about an imaginary focus by positioning the object axes outside the reference object. On particle creation the relative position of the object axes is also copied. By transferring the axes to the outside, the particles no longer rotate about their object focus, but in a circular movement around the imaginary axis focus. In this way you can simulate leaves falling from a tree. With some gravity and self-rotation, the leaves hover and move towards the ground with a graceful "propeller movement".

.topic 69
Select the "Action" tab in the particle systems dialog to bring to the front the property sheet with all parameters for gravity, friction, turbulence and whirls.

Acceleration and Friction
Acceleration
This field sets the value for the acceleration. Acceleration is measured in metres per second^2. In the selection box you can set the direction in which the accelerating force should operate. If "Vertical (gravity)" is entered here, a vertical accelerating force works along the negative Y-area axis, which simulates the acceleration due to the Earth's gravity - in the region of 9.81 m/s^2. This function is vital for many applications. Think of falling snow, bouncing balls, ballistic flight curves and many other situations which can only be achieved realistically through the combination of individual movement and gravity. Here again, with the ± parameter you can generate random variations in acceleration for the individual particles.

Friction
The acceleration due to the Earth's gravity is the same for all objects, independent of their mass - which implies that a sheet of paper, for example, falls to the ground just as fast as a steel ball. That is true, however, only in empty space. In a gas or liquid the actual speed depends on many additional factors, such as the density of the medium in which the falling body moves, the shape of the surface and the coefficient of friction of the body. A further factor is the air resistance. This grows with the increasing speed of the particles until it is practically as great as the accelerating gravitational force. The object is then in equilibrium and no longer accelerates. The speed no longer increases and the object falls at constant speed from this point on. Forget about the physics: simply vary the acceleration with the ± parameter and experiment with the friction parameter. For the technically minded: the air resistance to motion follows the quadratic drag law FL = 0.5 * cw-value of the body * density of the medium * frontal area of the body * V^2. We combine the value (0.5 * cw-value * density * area) into the constant k, which then gives: FL = k * V^2. A constant times the speed squared therefore produces the frictional resistance, and it is this constant k that you can edit with the friction parameter.

Acceleration Direction
The direction of the acceleration does not necessarily have to be vertically downwards. Negative values for the acceleration reverse the direction in which the acceleration works. Other acceleration directions are available in the selection box:
Acceleration along movement-vector - No matter in which direction your particles move, the acceleration or deceleration always acts along this direction.
Acceleration-Direction
The direction of the acceleration does not necessarily have to be vertically downwards. Negative values for the acceleration reverse the direction in which the acceleration works. Other acceleration directions are available in the selection box:

Acceleration along movement-vector - No matter in which direction your particles move, the acceleration or deceleration is always confined to this direction.

Acceleration in the direction of the center of the reference-object - An interesting effect is obtained if the generated particles not only follow the movement path of the reference-object, but are additionally pulled by the "gravity" of the reference-object into elliptical paths around this object. Swarming motions can be simulated excellently this way. A further possible effect is, for example, a solar flare in which continually ejected plasma-particles fall back into the sun, or an explosion directly followed by an implosion.

Vertical Acceleration - simulates the Earth's gravitational acceleration.

Turbulence
Switch on turbulence in order to bring some chaos into movement-paths that are too precise. The strength of the turbulence can be edited via the parameter of that name. Example: small smoke-particles that rise slowly from a chimney towards the sky and on their way are swirled and driven apart by air-turbulence.

Whirlwind - Rotation About the Y-Axis
Whirlwinds can be generated with this function. All particles that are generated in a frame by this particle-action rotate about the common focus of the particles. A whirlwind moving along its path is created by superimposing the movement of the reference-object. Whether the particles turn clockwise or anti-clockwise is determined by the "rotation-direction" selection box. With the rotation-angle parameters you supply the angle through which the particles are rotated about the common focus in each frame. You can set different speeds of rotation for the inner and outer particles through different angle-sizes for the inner and the outer area; rotation-speeds for particles in between are interpolated. Randomly varied rotation-speeds for each individual particle can again be defined with the ± parameter. Additional disorder can be obtained by switching on turbulence.

Example: generating a conical whirlwind. This is not quite so simple at first, because the particles have to be generated in such a way that they form an upright cone which can then be rotated. Two possibilities are described below. The starting point is a reference-particle-object on the ground, above which the whirlwind should develop.

1st option - a whirlwind present from the beginning: Generate particles from the focus of the reference-object with a position-relocation along the Y-axis, for example position parameters X = ±0, Y = ±100, Z = ±0. Switch on the function "only positive along movement-vector". The required movement-vector points vertically upwards, therefore choose the +Y-axis as the starting movement-vector. To create the cone shape, we spread the movement-vector over an opening-angle. Ready. The vertical whirlwind can now be animated with the whirlwind and turbulence functions and by superimposing the movement of the reference-object.

2nd option - a whirlwind lifting itself from the ground: This time, generate particles without position-relocation, i.e. position parameters X = ±0, Y = ±0, Z = ±0. Switch on the function "only positive along movement-vector". Set the movement-vector to vertically upwards by choosing the +Y-axis as the starting movement-vector. To enable the particles to rise in a cone, we again spread the movement-vector over an opening-angle. Choose a starting-speed and vary it, so that the particles rise at different speeds, resulting in a good distribution of the particles in the required cone-formation. Assign a friction-value to the particles. The particles lose their kinetic energy through friction and remain in place for some time, thus resulting in the desired cone-formation. Ready. The whirlwind standing on the ground can again be animated with the whirlwind and turbulence functions and by superimposing the movements of the reference-object. Naturally, gravity can also be switched on, so that the particles, depending on their vertical initial velocity and the prevailing wind-conditions (turbulence), are pulled back to the ground.
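The core of the whirlwind rotation - turning each particle of a frame about the common focus, with the rotation-angle interpolated between an inner and an outer value - can be pictured with a small sketch. This is purely an illustration of the idea; the function and data layout are assumptions and not the program's code.

import math

def whirl_step(particles, focus, inner_angle, outer_angle, outer_radius):
    """Rotate particles about the vertical axis through 'focus'.
    Particles at the focus turn by inner_angle degrees per frame, particles at
    outer_radius by outer_angle; particles in between are interpolated."""
    fx, _, fz = focus
    for p in particles:                      # each particle is a list [x, y, z]
        dx, dz = p[0] - fx, p[2] - fz
        t = min(math.hypot(dx, dz) / outer_radius, 1.0)   # 0 at the focus, 1 at the rim
        a = math.radians(inner_angle + t * (outer_angle - inner_angle))
        cos_a, sin_a = math.cos(a), math.sin(a)
        p[0] = fx + dx * cos_a - dz * sin_a  # rotation about the Y-axis
        p[2] = fz + dx * sin_a + dz * cos_a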
Color-Intensity Fade
For the particle-objects the material-settings are the same as those of the reference-object. Textures, transparencies or even films projected onto the particles are therefore no problem. However, with "intensity fade" you can achieve another effect specific to particles. If the function is switched on, the particles lose their color intensity with increasing lifetime and become black. This function is useful, for example, for the simulation of explosions, where the dispersing particles quickly lose their energy and their glow fades. With the "Fade per frame" parameter you decide by what fraction per frame the color-intensity of the particles fades. A good effect - especially in front of a bright background - can be achieved whereby the "burned-out" particles continue to hang as clouds of black ash in the turbulent wind. A particle-animation 80 frames long, for example, in which the particles should fade completely over 40 frames, requires a value of 1/40 = 0.025 for the fade per frame. In the remaining 40 frames of their short lives the particles can still continue to exist - as particles of black ash. The ± parameter provides the random deviation that makes the whole process more chaotic and therefore more natural in appearance.

Pulsate
How would you create glowworms, or a starry sky with some pulsating stars, from a particle cloud? If you switch on this function together with the intensity-fade function, the color-intensities are alternately faded out and then back in to their old value.

.topic 70
Select the "Collision" tab in the particle system dialog to bring to the front the property sheet with all parameters for collision tests.

Switching on the Collision-test
The collision-test is switched on if particle-objects should interact with the scene - for example, bounce off the ground or walls, or remain lying on objects. However, to speed up the collision-calculations, only those objects are considered for which the option "Particles - Reflector" is switched on in the material editor. No collision-tests are executed between the particles themselves.

Simple or extended collision-test
In the selection box you can decide whether a simple or an extended collision-test is used. The simple collision-test is intended for very small particle-objects, i.e. usually a simple small triangle, or perhaps for a fast animation preview. During the simple collision-test only the focus of a particle is tested for collision with other objects. With the extended collision-test all points of the particle are covered by the calculation.

Particle rebound
If a particle meets an object, the force of the rebound depends upon elasticity. Drop a marble onto a hard floor, and it rebounds to almost exactly the same height. A completely non-elastic lump of clay would, instead, remain on the ground. With the Rebound parameter you decide how the particles will respond on collision. A value of 0 signifies a total lack of elasticity, so that the particles remain lying on the objects. A value of 1 corresponds to ideal elasticity. If a particle with a Rebound value of 1 falls vertically to the ground, it rebounds to its original position forever more, unless its movement-energy is lost in some other way. Realistic values generally lie between 0 and 1. However, values greater than 1 are permitted; in this case, particles meeting object-walls will rebound with even greater speed (trampoline-effect).
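A hedged sketch of what such a rebound amounts to: on impact, the velocity component along the surface normal is reflected and scaled by the rebound value. The function below is an illustration under that assumption, not the program's collision code.

def rebound_velocity(velocity, normal, rebound):
    """Reflect the velocity at a surface with unit normal 'normal'.
    rebound = 0 -> the particle stops, 1 -> ideal elasticity,
    values > 1 -> the particle leaves faster than it arrived (trampoline-effect)."""
    vn = sum(v * n for v, n in zip(velocity, normal))               # speed into the surface
    return [v - (1.0 + rebound) * vn * n for v, n in zip(velocity, normal)]

# A particle falling straight down onto the ground (normal pointing up):
print(rebound_velocity([0.0, -5.0, 0.0], [0.0, 1.0, 0.0], 1.0))    # [0.0, 5.0, 0.0]
print(rebound_velocity([0.0, -5.0, 0.0], [0.0, 1.0, 0.0], 0.0))    # [0.0, 0.0, 0.0]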
.topic 300
There follows a specification of all new functions and expansions in version 9.0 of the CyberMotion 3D-Designer program. Detailed descriptions of the functions can be found in the relevant chapters. The elimination of minor errors and repairs is not mentioned here.

New features in version 9.0:

Global Illumination using Photon Mapping
Raytracing is a standard for high picture quality and for realistic reflections and refraction. One of the major drawbacks of a general raytracing implementation is that it does not take into account the indirect illumination - the light that is reflected from other objects in the scene, as opposed to the direct light from a light source. Usually a constant light intensity can be defined to simulate this indirect lighting, but that is a very poor approximation. Especially in architectural scenes the illumination in a room is dominated by indirect light reflected many times from the diffuse surfaces in a building. Now, with the newly implemented photon mapping algorithm, CyberMotion provides a global illumination model that combines the strengths of raytracing - reflection and refraction - with the ability to also render the indirect illumination caused by diffuse reflections in the scene.

Rendering a picture with photon mapping is a two-pass procedure. In a preliminary run little packages of energy (photons) are emitted from the light sources in the scene. Similar to ordinary raytracing, the paths of these photons are traced through the scene, and just like in raytracing photons are reflected from specular surfaces and refracted in transparent objects. But each time a photon hits a diffuse surface, the position and properties of the photon are stored in a 3-dimensional data structure called the photon map. Simultaneously a diffuse reflection is calculated and the diffusely reflected photon continues its way through the scene until it is absorbed or lost in space. After the photon map has been calculated, the picture is rendered in an ordinary raytracing run and the photon map is evaluated when calculating the incoming light intensity for a point.

There are more advantages of photon mapping:
- Color Bleeding - e.g. when a green wall casts greenish reflections on a neighbouring white wall.
- Caustics: caustics are light reflections from highly specular surfaces or, e.g., the light gathered in a focal point after transmission through a glass lens.
- Photon maps can contain several million photons - so a large amount of RAM (at least 128MB) is a priority when testing photon mapping.
- You can apply photon mapping in two different ways: 1. photon mapping only for the indirect illumination, in combination with the conventional direct light calculations; 2. photon mapping alone as a global illumination model, where all the light in the scene is calculated by evaluating the photon map.
- CyberMotion generates two different photon maps - one for the diffuse indirect illumination and one for the light reflections due to caustics. This is because caustics usually need a lot more photons to render sharp contours, e.g. when light is refracted through a glass and focused in a sharp point.
- You can specify for each light source whether it contributes to the photon mapping or whether it is interpreted only as a direct light. If, e.g., insignificant background lights are involved, you can exclude them from the photon mapping process without reducing the quality of the picture.
- Static photon map for animations. The photon map will be calculated only once at the beginning of an animation. You can use this option for fly-throughs in architectural scenes where the objects themselves do not move.
- You can exclude individual objects from the photon mapping. These objects will be rendered using conventional direct lighting. There are two possible uses: 1. for tiny objects that are not hit by enough photons to shade them correctly, 2. for moving objects in animations with static photon maps. Thus you can use a static photon map for the architectural environment and conventional direct lighting for moving objects like, for instance, cars on a street in a city.
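For readers who want to picture the two-pass procedure, here is a deliberately tiny sketch: photons are scattered onto a single diffuse floor plane in pass one, and pass two estimates the incoming light at a point from the photons stored nearby. Data structures, photon counts and the emission pattern are simplified assumptions for illustration; this is not CyberMotion's implementation.

import math, random

photon_map = []   # pass 1 stores (x, z, power) for every hit on the diffuse floor y = 0

def emit_photons(light_pos, count, power):
    for _ in range(count):
        # pick a random downward direction and intersect it with the floor plane y = 0
        dx = random.uniform(-1.0, 1.0)
        dy = -random.uniform(0.1, 1.0)
        dz = random.uniform(-1.0, 1.0)
        t = -light_pos[1] / dy
        photon_map.append((light_pos[0] + t * dx, light_pos[2] + t * dz, power / count))

def irradiance(x, z, radius=1.0):
    """Pass 2: estimate the incoming light at (x, z) from the photons stored nearby."""
    gathered = sum(p for px, pz, p in photon_map
                   if (px - x) ** 2 + (pz - z) ** 2 <= radius ** 2)
    return gathered / (math.pi * radius ** 2)   # density estimate over the search disc

emit_photons((0.0, 4.0, 0.0), 20000, power=100.0)
print(irradiance(0.0, 0.0))   # comparatively bright directly below the light source
print(irradiance(6.0, 0.0))   # typically darker further away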
Light and Fire

Area Lights - Each object can now be interpreted as an area light. In the material dialog there is a new property called "Area Light". When you activate this option an object changes into an area light object. The number of light/shadow feelers used to scan an area light is determined by the number of points the object consists of, and the more regular the object, the better the result. NURBS-patches are an ideal way to realize an area light. They have a regular point spacing and can easily be inserted as flat panels in walls. A great benefit also - you can change the point-resolution of a NURBS patch at any time, thus reducing the number of points for preview pictures and increasing it again when rendering the final picture. However, the number of light- and shadow-sensors is limited to a maximum of 200; if there are more points in an object, points will be picked randomly from the object. Apart from NURBS-patches you can use any object for an area light, even analytical objects or objects with the property "Point=Sphere" switched on - that would result in a cluster of light spheres. Once you set the area light property for an object in the material dialog, it is also listed in the light dialog. In the light dialog the object is managed as a normal light object with all the familiar parameters belonging to a light source. You can edit and animate light colors and intensity there. The light color is different from the material color. This is because area light objects are interpreted as a hull for an inner light source, and the hull can have any material you like, for instance a diffuse plastic light panel or a simple glass sphere. The light color is then added and makes the light object shine. In an animation you would slowly increase the light color from a dark color to a bright color to let the object light up. Last but not least, an area light object also contributes to the photon map when photon mapping is applied. Therefore all relevant parameters for photon mapping are provided for area lights too.

Light Intensity - The intensity fall-off of light objects is now calculated exponentially instead of with a constant linear value. This results in a faster light intensity fall-off which comes closer to real-world conditions. This change will also help to coordinate the two different illumination modes - direct lighting and photon mapping - to match in their intensity levels. This will enable you to switch from raytracing to photon mapping and vice versa without always having to adjust the intensities again. Differences can be smoothed by using the Intensity Correction parameters provided for each light source.
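The random selection of sensors on an area-light object, mentioned above, can be pictured in a few lines. Only the limit of 200 sensors is taken from the description; the helper itself is an assumed illustration, not a CyberMotion function.

import random

MAX_SENSORS = 200   # maximum number of light/shadow sensors per area light

def pick_shadow_sensors(object_points):
    """Use every point of the area-light object as a sensor, or a random
    subset if the object has more points than the sensor limit."""
    if len(object_points) <= MAX_SENSORS:
        return list(object_points)
    return random.sample(object_points, MAX_SENSORS)

# A NURBS patch with a 30 x 30 point resolution offers 900 candidate points:
patch_points = [(x * 0.1, 0.0, z * 0.1) for x in range(30) for z in range(30)]
print(len(pick_shadow_sensors(patch_points)))   # 200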
Shadow Sensors - The number of shadow sensors can now be input separately for each light object (up to 97). Additional shadow sensors are only calculated for Lamps and Spots with a certain radius. There is no Shadow Dispersion value anymore, since the dispersion for every shadow sensor is calculated from the light circle given by the light radius.

Volumetric Fire - With the new Volumetric Fire object almost all kinds of fire can be simulated, from smoothly burning candle flames up to vividly burning torches, camp fires or blazing seas of flames. Volumetric Fire is confined to a cylindrical bounding box with an additional lamp object fixed to it. Volumetric Fire objects are created within the Light Dialog. Since a lamp object is automatically subordinated to the fire cylinder, all parameters for lamp lights can be edited when a lamp belonging to a fire object is selected. In addition to the lamp details, all parameters forming the Volumetric Fire are displayed in the Light Dialog. Fundamentally, Volumetric Fire is calculated similarly to Volumetric Fog, applying a ray marching algorithm that takes samples of fire density along the path through the fire cylinder, so most of the parameters describing the fire are similar to the Volumetric Fog parameters. Additional parameters define the color palette of the fire, the shaping within the cylinder, the turbulent flow and the flickering (shifting of lamp position and intensity in an animation) of the flame.

Backgrounds

Atmosphere - "Sky", "Sky & Clouds", "Fog Linear" and "Fog Turbulence" were integrated into one single "Atmosphere" entry. Four separate register tabs are provided to control the background color range, the cloud formation, the widely extended fog functions and the atmospheric filter functions. Many improvements have been made and thus former project files may have to be adjusted slightly, but of course all projects will be converted automatically to suitable atmosphere settings as far as possible.

Sky Colors - Additional color range mode for backgrounds. Apart from the color range gradient from zenith to horizon you can now also apply a sun-centered color range. The colors run concentrically from the sun around the world sphere. This enables more realistic sunsets with bright (reddened) colors around the sun and darker (blue) shadings in the direction opposite the sun. So there is no longer any need to animate the background colors when you only want to animate an all-round pan with the camera.

Atmospheric Filter - As a light ray traverses an atmosphere, some light is extinguished and some light may be added by emission and scattering. (Atmosphere here means fog switched on in CyberMotion.) This results in a change of color with distance, i.e. dark backgrounds become bluer (sky, additive component) and light backdrops redder (filter component). On the Filter tab you can now switch on both filter types with corresponding color values and intensities. Don't hesitate to experiment with these new filters (remember to switch on fog as well); they can greatly improve the realism of outdoor scenes and reduce the fiddling needed to find proper background color ranges. With proper settings for the atmospheric filters and fog you could even simulate a blue sky with a red sunset with nothing more than a single dark background color.
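To make the filter/additive idea more concrete, here is a deliberately simplified sketch of how such a distance-dependent color shift can be computed. The exponential form, the density value and the colors are assumptions for illustration only; the manual does not specify CyberMotion's exact formula.

import math

def atmospheric_color(surface_rgb, distance, filter_rgb, sky_rgb, density=0.02):
    """Blend a surface color towards the atmosphere with growing distance.
    The filter component tints and extinguishes the surface color, the additive
    (sky) component is mixed in as the extinction grows."""
    transmitted = math.exp(-density * distance)   # how much of the surface color survives
    return [s * transmitted * f + a * (1.0 - transmitted)
            for s, f, a in zip(surface_rgb, filter_rgb, sky_rgb)]

# With these made-up colors, a dark distant backdrop is pushed towards the bluish
# additive component, while a bright one keeps a reddened share of its own color:
print(atmospheric_color([0.9, 0.9, 0.9], 30, [1.0, 0.6, 0.5], [0.3, 0.4, 0.8]))
print(atmospheric_color([0.05, 0.05, 0.05], 30, [1.0, 0.6, 0.5], [0.3, 0.4, 0.8]))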
Fog - There are two optional fog modes: an atmospheric fog that is coordinated with the atmospheric filter functions and should be activated in every outdoor scene, and an additional ground fog. Both fog modes now come with two height parameters, ground height and overall height. This is because fog now gets thinner with increasing height (below the ground height full density is applied). As ground fog is meant for heavy layers of thick fog gathering above the ground, the thinning out of ground fog can optionally be switched off.
- Fog is now calculated with exponentially increasing density.
- Fog can be illuminated - this means the fog color is calculated from the incident light of parallel light sources (suns).
- Volumetric Ground Fog - You can switch on a volumetric function to render the ground fog. With it, not just a layer of uniform fog is applied but a truly whirling and cloudy medium is calculated. This is done by tracing the viewing ray through the foggy medium and taking many samples of fog density and illumination along its path - rendering time will increase accordingly. Another advantage of volumetric fog is the possibility to render realistic animations of swirling fog rising from the ground and driven forth by the wind. Even the illumination of point light sources and spot cones becomes clearly visible in volumetric fog, as the incoming light is also sampled while tracing through the fog medium.

Materials

Noise Normal Distortion - The noise function has a new Scale parameter for the general frequency of the noise pattern, independent of the separate values for the x-, y- and z-axis. The algorithm was improved considerably (at the cost of rendering time), so that especially landscape textures will benefit from it. You can also switch on the B-Spline function to further improve the normal distortion (again at an increased rendering time cost).

Procedural Textures - New option "Texture Color = Material Color" for all texture patterns. This deactivates the color mixing so that only the normal distortion remains visible. You can use it simply to check the normal distortion without the distracting interference of the color pattern, or to create a simple tiled texture, e.g. with a single plain color but the normal distortion of the invisible block texture.

Landscape Texture Layers:
- additional option to apply a texture layer with a spotted appearance. Only patches of the texture layer will show on the ground, mingling with underlying layers and resulting in a more complex appearance.
- normal distortion and random application improved; landscape layers have a much more realistic appearance.

Water - Wave algorithm improved, with a smoother flow of water and more turbulent currents.

and...

Additional "Dimension" box in "Scale Object" work mode. You can directly define the dimensions of an object via the width, height and depth of the object's bounding box, in world space and object-axis mode. This can also be applied to analytical objects (corresponding axes will be adjusted automatically).

Drop Selection - In "Move Object" work mode you can drop objects to the ground. If an object hovers above another object it will land on it; if not, nothing will happen. You can drop selected objects or hierarchies as a whole, or let each object/hierarchy fall down on its own path.
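The drop behaviour ("land on the first object below, otherwise do nothing") can be illustrated with a tiny sketch. The ray-cast helper passed in here is assumed for the example and is not a CyberMotion function.

def drop_object(obj, cast_ray_down):
    """Move 'obj' down onto the first surface found below it.
    cast_ray_down(position) is assumed to return the distance to the first
    surface straight below that position, or None if there is nothing below."""
    distance = cast_ray_down(obj["position"])
    if distance is None:
        return                       # nothing below the object: leave it where it is
    x, y, z = obj["position"]
    obj["position"] = (x, y - distance, z)

# A hovering cube lands on a floor 2.5 units below it:
cube = {"position": (0.0, 3.0, 0.0)}
drop_object(cube, lambda pos: 2.5)
print(cube["position"])              # (0.0, 0.5, 0.0)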
To support the construction of objects, bitmaps can now be displayed in the viewport windows. Two new menu entries in the "View" menu bar are available to switch on a bitmap for a viewport and to select a bitmap via the file selector box.

Dialog parameters that automatically generate a keyframe when changed are painted with a light blue background to indicate that these parameters can be animated.

Object Selection Dialog - You can now select several objects in one go by dragging up a frame with the mouse in the selection window. A popup selection then appears in which you can decide whether to switch the objects on/off or to copy/delete them.

The mouse wheel can now be used to scroll up and down in object selection lists, library windows and within the render window.

In "View - Normals" viewing mode only the normals of selected objects or facets are drawn.

The default parameters for generating landscapes and atmospheres were adjusted for better coordination with fog, color filters and waves.

Landscape Editor - A suitable camera position for the generated landscape will be assigned automatically and the background switched on if the corresponding option is activated on generation.

Viewport Zoom - Additional zoom steps from 4% down to 1% have been added to provide an overall view when working with large-scale landscape objects.

Previous versions: Expansions to CyberMotion 3D-Designer version 8.0

.topic 98
There follows a specification of all new functions and expansions in version 8.0 of the CyberMotion 3D-Designer program. Detailed descriptions of the functions can be found in the relevant chapters. The elimination of minor errors and repairs is not mentioned here.

New features in version 8.0:

Compact project files - Version 8.0 project files are about 50% more compact, enabling faster access and up to 30-times faster UNDO/REDO functions. Of course version 8.0 reads older files so they can still be used, but projects saved in version 8.0 cannot be used in previous versions.

Landscapes and planets - The somewhat old-fashioned "Fractal Object" dialog was replaced by a thoroughly new design for the landscape editor. The landscape editor provides a large preview window with a shaded plan view and many functions and filters (crater, terrace, etc.) for the basic generation and editing of the height fields. By means of special painting tools you can "draw" directly on the preview window - to elevate or lower the ground or to smooth eroded slopes, for instance. In planet mode the landscape net is wrapped around a sphere to create highly detailed meteors or planets. The landscape editor can generate nets of up to 2 million facets, but this has to be handled carefully. A minimum of 256MB RAM should be available when topping a million facets, otherwise the constant swapping of memory to the hard disk will virtually bring the whole process to a standstill. However, there is no need at all to generate such large nets, because the new landscape objects come with new multi-layered procedural textures to provide the necessary details. Finally, a visual landscape library facilitates the management of existing and new landscape patterns.
Background, clouds and atmospheres - The background dialog has been improved too, now providing a host of new functions for much more complex atmosphere and cloud models:
- new multi-layered cloud presentation
- additional light mode "parallel light = sun" interprets a parallel light source as a sun object that illuminates clouds and is rendered in the atmospheric background of the scene.
- the three background colors are replaced by a freely editable color range. Color ranges can be managed in a visual library.
- cloud turbulence can be animated, both in velocity and in wind direction.
- the atmospheric haze (now with a separate color) creeps up in the sky, covering the landscape as well as the low heights at the horizon. Even the sun will be overlaid by the haze, thus creating atmospheric sunsets.
- complete preview rendering in the dialog, providing various grades of quality (scanline, raytracing with and without shadows and antialiasing), two resolutions and several arrangements of the viewing modes (background only, background with plane, or complete scene in a panoramic or camera view). You can create a new plane object directly in the background dialog or copy the standard panoramic camera view to the current scene camera.
- you can load backgrounds from a visual background library or save your own creations to the library.
- you can now switch on the background in the background dialog as well as via the object selection dialog.

Light dialog - the dialog now features:
- a large preview window with all the options already described for the background preview window. You can now switch between the familiar centered lens flare preview and a camera scene preview. Changes in the light settings are shown immediately in the preview window and adjustments can be made in an instant. The automatic redraw of the preview window can be switched off for complex scenes, and preview renderings can be canceled at any time by pressing the "ESC" key (if, for example, you forgot to select a lower quality mode when rendering complex landscapes with shadows and antialiasing in the preview window).
- light objects can be switched on and off in the light dialog as well as in the object selection dialog.
- the volumetric light effect can be switched on and off separately for each individual spot light in the dialog.
- visible light objects are now described by an inner radius as well as an outer halo radius.
- all lens flare effects are now also available for the parallel light object.
- "Sun" mode for parallel light sources. In combination with a clouded sky background the sun illuminates the clouds. The sun is always rendered as a visible disc in the background behind the clouds and its intensity is filtered by the clouds and the atmospheric haze. Furthermore, the color range used to produce the sky background can be distorted in the area around the sun, thus creating more realistic sky illumination around the sun.
- a single global intensity parameter now controls the intensity of the whole lens flare, with the exception of the lens flare spot intensity.
- an additional global scale parameter allows the adjustment of the overall size of the lens flare effect once the general shape has been edited via the various effect parameters.
- the standard parallel light object created on startup for every new scene is automatically set as a sun object. You should switch off this option if you do not intend to produce an outdoor scene.
Material dialog - extensive improvements have been implemented:
- the new preview window offers additional primitive objects as well as a scene preview and a centered preview of the currently selected object.
- a visual material library facilitates loading from and saving to the comprehensive material library.
- additional specular reflection color. The diffuse reflection determines the basic color of an object. Diffuse stands for a regular, dull reflection of light; the object appears to be of a constant tone from all views. In contrast to the diffuse reflection, the specular reflection represents the amount of mirrored light, i.e. highlights from light sources or mirrored objects in the scene. By specifying a particular color for the specular reflection, a filter function is applied to the mirrored light. With metal surfaces the specular color usually comes close to the diffuse color, resulting, for instance, in golden reflections on a golden surface. Other surfaces reflect the whole light spectrum, resulting, for instance, in white highlights on plastic or varnished surfaces.
- the flow direction of waves can be input.
- block texture - the "Row Offset" shifts every second block row by a given sideways offset, thus giving the impression of a brick structure.
- new projection modes for cylindrical bitmap projection, similar to plane projection.
- new implementation of the procedural texture pattern distortion. You can choose between linear (faster) or B-Spline interpolation and define the number of iterations for the fractal distortion.
- global scaling of the procedural texture pattern can be applied directly in the material editor as well as via the scale texture function in the viewport windows.
- normal distortion now has two additional random distortion modes - "Random" (a random noise distortion ignoring the texture pattern) and "Random & Texture" (a combination of pattern-controlled and random distortion). You can specify a separate distortion value for each of the three object texture axes.
- new color range texture. You can define a color range that either covers the whole object or runs repeatedly across the object, similar to colorful stripes. This is an ideal texture for sedimentary rock materials.
- new fractal noise texture - the basic material setting for rock or landscape textures.
- new "terrain" page in the material dialog provides up to three additional landscape texture layers. These texture layers are again based upon the fractal noise texture and are applied depending on the height and the angle of the slope in a terrain. Thus, for instance, you can apply a ground texture up to a specific height, overlay it with a grass layer and finally add a snow texture layer with the snow gathering only at great heights and on gentle slopes.

Now, if you scale an object symmetrically, the procedural textures and bitmaps assigned to it are scaled automatically with the object.

Starfield - The starfield effect can be switched on or off in the starfield editor as well as in the Render Options dialog. Depending on the picture resolution, the total number of stars will be reduced to prevent the stars from flooding the picture at low resolutions.

Antialiasing - Only those pixels whose color-value deviates from that of their neighbours by more than a threshold value are subdivided into smaller sub-pixels. This threshold value can now be set in the raytracing box. It is also possible to set this value to zero, in which case every single pixel will be subdivided into sub-pixels - but rendering time increases considerably.
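As a rough illustration of this adaptive scheme, the sketch below refines a pixel only when it differs from a neighbour by more than the threshold. The color-difference measure and the supersampling helper are invented for the example; the renderer's actual criteria are not documented here.

def needs_refinement(pixel_rgb, neighbour_rgbs, threshold):
    """A pixel is subdivided only if it deviates from one of its neighbours
    by more than the threshold; a threshold of zero refines every pixel."""
    for n in neighbour_rgbs:
        if max(abs(a - b) for a, b in zip(pixel_rgb, n)) > threshold:
            return True
    return threshold == 0.0

def final_color(pixel_rgb, neighbour_rgbs, threshold, supersample):
    """supersample() stands for re-rendering the pixel with sub-pixels."""
    if needs_refinement(pixel_rgb, neighbour_rgbs, threshold):
        return supersample()    # expensive: subdivide the pixel into sub-pixels
    return pixel_rgb            # cheap: keep the single sample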
All angle instruments (light angle of incidence, camera angles, flow directions, etc.) can now be set directly with the mouse, simply by clicking into the instrument and dragging the needle to the desired position.

The automatic creation of circular and elliptical templates for the sweep editor was supplemented with an additional X-Offset parameter. With it the shape of the circular template is shifted along the x-axis, thus creating a template for a torus object.

The reference-point of each rotation or scaling can now be input exactly with the keyboard.

The Undo/Redo functions are now available via the standard key combinations "Ctrl-Z" (UNDO) and "Ctrl-Y" (REDO).

.topic 13
About CyberMotion 3D-Designer - The program information is covered here.

.topic 97
mailto:support@3d-designer.com - Choose this menu entry to start your standard e-mail client with the addressee filled in automatically.

.topic 95
www.3d-designer.com - Choose this menu entry to direct your internet browser to the CyberMotion 3D-Designer homepage.

.topic 10
Info - A dialog box appears, giving information about the number of objects and points used and still available.

.topic 55
New - By choosing this menu entry, all objects in memory are deleted and the settings for camera, background and illumination are reset to the initial parameters, as found at the start of the program.

.topic 7
Show Last Rendered Picture/Animation - When the render window has been closed, minimized or lies under the main window, you can easily open it or bring it to the top again by selecting this function. The content of the render window will be restored, so closing the render window will not delete a rendered picture or animation.

.topic 9
Quit - Leaves the program.

.topic 16
Zoom - You can readily switch between different scales by means of the zoom selection box - located in the button bar directly above the depiction windows - thus influencing the picture detail visible in the viewports. The preset is always 100%. You can, however, select any value between 1% and 4000%. The enlargement is only there to ease work while moving, rotating or scaling objects in the viewport windows. The settings for camera zoom are independent of this scale value.

.topic 64
Show as box - A dotted box is always drawn around objects you have selected to move, scale or turn with the mouse. If, for example, you now want to move an object, the object can be depicted in two ways during the action:
1. Show as box (active) - If the menu entry "Show as box" has been activated in the "Edit" menu strip, only the dotted box is moved during the action. The complete scene is redrawn when you release the mouse button.
2. Show as box (not active) - If the menu entry "Show as box" is not selected, each action is represented in real time. That is, the scene is constantly updated and redrawn during the action.
If you are working on a complex scene or your computer does not have the necessary performance, you can use the "Show as box" mode. This saves unnecessary picture recalculations and enables you to work more fluidly on complex scenes.

.topic 260
Shareware programs are not free. "Shareware" is the accepted word, but "Trialware" would be a better name for it. You are usually given a period of time to use and evaluate the program to help you decide whether or not you wish to purchase it.
After you are done with the trial period, you must either pay for the program (often called registering) or remove it from your computer. Please support the authors of good shareware programs (and their families) by paying for this and other shareware programs!

.topic 26
Back to Main Menu - This entry returns you to the main menu. Drawings that have not yet been finalized are not lost. They are automatically drawn again as soon as you return to the respective extrude/sweep editor.

.topic 24
Load Template - To load an object template into the program from a file, you must first delete any template that has already been started. Then choose the "Load Template..." entry. The file selection box appears and allows you to make your selection. The file suffix for object templates is "TMP".

.topic 23
Save Template - If you have drawn a template that you may want to use again, choose the "Save Template..." entry. The file selection box appears and in it you can give your template an appropriate name. The file suffix for templates is "TMP". On confirmation, the file is saved and you can continue your work.

.topic 77
CyberMotion 3D-Designer
Copyright © 1995-2005, Reinhard Epp Software

For technical support, questions or criticism contact:
Reinhard Epp Software
Donauschwabenstr. 75 A
33609 Bielefeld
Germany
mailto: support@3d-designer.com
http://www.3d-designer.com

The English manual was translated by:
John Ridgway
ILLUSTRATION AND DESIGN
1 Taunton Drive
Farnworth, Bolton
Manchester
UK BL4 ONG
mailto: john@ridgwaydesign.fsnet.co.uk

Special thanks to John Ridgway for his help with the translation, and to all users who helped with beta testing and contributed useful suggestions on features and usability.

CyberMotion 3D-Designer was programmed by Reinhard Epp using GFA BASIC 32.
CyberMotion is a trademark of Reinhard Epp Software. All other products mentioned are registered trademarks or trademarks of their respective companies.